Name | Description | Type | Package | Framework |
AbstractCounters | An abstract class to provide common implementation for the Counters container in both mapred and mapreduce packages. | Class | org.apache.hadoop.mapreduce.counters | Apache Hadoop |
|
AbstractDNSToSwitchMapping | This is a base class for DNS to Switch mappings. | Class | org.apache.hadoop.net | Apache Hadoop |
|
AbstractEvent | Parent class of all the events. | Class | org.apache.hadoop.yarn.event | Apache Hadoop |
|
AbstractFileSystem | This class provides an interface for implementors of a Hadoop file system (analogous to the VFS of Unix). | Class | org.apache.hadoop.fs | Apache Hadoop |
|
AbstractLivelinessMonitor | A simple liveliness monitor with which clients can register, trust the component to monitor liveliness, get a call-back on expiry, and finally unregister. | Class | org.apache.hadoop.yarn.util | Apache Hadoop |
|
AbstractMapWritable | Abstract base class for MapWritable and SortedMapWritable; it maintains the class-to-id mapping used to serialize map entries compactly. | Class | org.apache.hadoop.io | Apache Hadoop |
|
AbstractMetric | | Class | org.apache.hadoop.metrics2 | Apache Hadoop |
|
AbstractMetricsContext | The main class of the Service Provider Interface. | Class | org.apache.hadoop.metrics.spi | Apache Hadoop |
|
AbstractService | This is the base implementation class for services. | Class | org.apache.hadoop.service | Apache Hadoop |
|
AccessControlException | An exception class for access control related issues. | Class | org.apache.hadoop.fs.permission | Apache Hadoop |
|
AclEntry | Defines a single entry in an ACL. | Class | org.apache.hadoop.fs.permission | Apache Hadoop |
|
AclEntryScope | Specifies the scope or intended usage of an ACL entry. | Class | org.apache.hadoop.fs.permission | Apache Hadoop |
|
AclEntryType | Specifies the type of an ACL entry. | Class | org.apache.hadoop.fs.permission | Apache Hadoop |
|
AclStatus | An AclStatus contains the ACL information of a specific file. | Class | org.apache.hadoop.fs.permission | Apache Hadoop |
|
AddingCompositeService | Composite service that exports the add/remove methods. | Class | org.apache.hadoop.registry.server.services | Apache Hadoop |
|
AddressTypes | Enum of address types, as integers. | Interface | org.apache.hadoop.registry.client.types | Apache Hadoop |
|
AdminSecurityInfo | | Class | org.apache.hadoop.yarn.security.admin | Apache Hadoop |
|
AggregatedLogFormat | | Class | org.apache.hadoop.yarn.logaggregation | Apache Hadoop |
|
AHSClient | | Class | org.apache.hadoop.yarn.client.api | Apache Hadoop |
|
AHSProxy | | Class | org.apache.hadoop.yarn.client | Apache Hadoop |
|
AllocateRequest | The core request sent by the ApplicationMaster to the ResourceManager to obtain resources in the cluster. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
AllocateResponse | The response sent by the ResourceManager to the ApplicationMaster during resource negotiation. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
AMCommand | Command sent by the ResourceManager to the ApplicationMaster in the AllocateResponse. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
AMRMClient | A client for ApplicationMasters to interact with the ResourceManager. | Class | org.apache.hadoop.yarn.client.api | Apache Hadoop |
|
AMRMClientAsync | AMRMClientAsync handles communication with the ResourceManager and provides asynchronous updates on events such as container allocations and completions. | Class | org.apache.hadoop.yarn.client.api.async | Apache Hadoop |
|
AMRMTokenIdentifier | AMRMTokenIdentifier is the TokenIdentifier to be used by ApplicationMasters to authenticate to the ResourceManager. | Class | org.apache.hadoop.yarn.security | Apache Hadoop |
|
AMRMTokenSelector | | Class | org.apache.hadoop.yarn.security | Apache Hadoop |
|
ApplicationAccessType | Enumeration of application access types. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ApplicationAttemptId | ApplicationAttemptId denotes the particular attempt of an ApplicationMaster for a given ApplicationId. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ApplicationAttemptNotFoundException | Thrown when the application attempt referenced by a (GetApplicationAttemptReportRequest) cannot be found. | Class | org.apache.hadoop.yarn.exceptions | Apache Hadoop |
|
ApplicationAttemptReport | ApplicationAttemptReport is a report of an application attempt. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ApplicationClassLoader | A URLClassLoader for application isolation. | Class | org.apache.hadoop.util | Apache Hadoop |
|
ApplicationClassLoader | This type has been deprecated in favor of ApplicationClassLoader. | Class | org.apache.hadoop.yarn.util | Apache Hadoop |
|
ApplicationClientProtocol | The protocol between clients and the ResourceManager to submit/abort jobs and to get information on applications, cluster metrics, | Interface | org.apache.hadoop.yarn.api | Apache Hadoop |
|
ApplicationConstants | This is the API for the applications comprising of constants that YARN sets up for the applications and the containers. | Interface | org.apache.hadoop.yarn.api | Apache Hadoop |
|
ApplicationHistoryProtocol | The protocol between clients and the ApplicationHistoryServer to get the information of completed applications etc. | Interface | org.apache.hadoop.yarn.api | Apache Hadoop |
|
ApplicationId | ApplicationId represents the globally unique identifier for an application. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ApplicationIdNotProvidedException | Exception thrown when a client submits an application without providing an ApplicationId in the ApplicationSubmissionContext. | Class | org.apache.hadoop.yarn.exceptions | Apache Hadoop |
|
ApplicationMaster | An ApplicationMaster for executing shell commands on a set of launched containers using the YARN framework. | Class | org.apache.hadoop.yarn.applications.distributedshell | Apache Hadoop |
|
ApplicationMasterProtocol | The protocol between a live instance of ApplicationMaster and the ResourceManager. | Interface | org.apache.hadoop.yarn.api | Apache Hadoop |
|
ApplicationNotFoundException | Thrown when the application referenced by a (GetApplicationReportRequest) cannot be found. | Class | org.apache.hadoop.yarn.exceptions | Apache Hadoop |
|
ApplicationReport | ApplicationReport is a report of an application. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ApplicationResourceUsageReport | Contains various scheduling metrics to be reported by UI and CLI. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ApplicationsRequestScope | Enumeration that controls the scope of applications fetched. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
ApplicationSubmissionContext | ApplicationSubmissionContext represents all of the information needed by the ResourceManager to launch the ApplicationMaster for an application. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ArrayFile | A dense file-based mapping from integers to values. | Class | org.apache.hadoop.io | Apache Hadoop |
|
ArrayListBackedIterator | This class provides an implementation of ResetableIterator. | Class | org.apache.hadoop.mapred.join | Apache Hadoop |
|
ArrayListBackedIterator | This class provides an implementation of ResetableIterator. | Class | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
ArrayPrimitiveWritable | A wrapper class that wraps a Writable implementation around an array of Java primitives (e.g. int[], long[]). | Class | org.apache.hadoop.io | Apache Hadoop |
|
ArrayWritable | A Writable for arrays containing instances of a class. | Class | org.apache.hadoop.io | Apache Hadoop |
|
AsyncDispatcher | Dispatches Events in a separate thread. | Class | org.apache.hadoop.yarn.event | Apache Hadoop |
|
AvroFSInput | Adapts an FSDataInputStream to Avro's SeekableInput interface. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
AvroReflectSerialization | Serialization for Avro Reflect classes. | Class | org.apache.hadoop.io.serializer.avro | Apache Hadoop |
|
AvroSerialization | Base class for providing serialization to Avro types. | Class | org.apache.hadoop.io.serializer.avro | Apache Hadoop |
|
AvroSpecificSerialization | Serialization for Avro Specific classes. | Class | org.apache.hadoop.io.serializer.avro | Apache Hadoop |
|
AzureException | Thrown if there is a problem communicating with Azure Storage service. | Class | org.apache.hadoop.fs.azure | Apache Hadoop |
|
AzureFileSystemInstrumentation | A metrics source for the WASB file system that tracks the metrics needed for a clear picture of its performance and reliability. | Class | org.apache.hadoop.fs.azure.metrics | Apache Hadoop |
|
BadFencingConfigurationException | Indicates that the operator has specified an invalid configuration for fencing methods. | Class | org.apache.hadoop.ha | Apache Hadoop |
|
BaseClientToAMTokenSecretManager | A base SecretManager for AMs to extend and validate Client-RM tokens issued to clients by the RM, using the underlying master key shared by the RM with the AMs. | Class | org.apache.hadoop.yarn.security.client | Apache Hadoop |
|
BigDecimalSplitter | Implements DBSplitter over BigDecimal values. | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
BinaryComparable | Interface supported by WritableComparable types supporting ordering/permutation by a representative set of bytes. | Class | org.apache.hadoop.io | Apache Hadoop |
|
BinaryPartitioner | Partition BinaryComparable keys using a configurable part of the bytes array returned by BinaryComparable. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
BinaryPartitioner | Partition BinaryComparable keys using a configurable part of the bytes array returned by BinaryComparable. | Class | org.apache.hadoop.mapreduce.lib.partition | Apache Hadoop |
|
BinaryRecordInput | | Class | org.apache.hadoop.record | Apache Hadoop |
|
BinaryRecordOutput | | Class | org.apache.hadoop.record | Apache Hadoop |
|
BindFlags | Combinable Flags to use when creating a service entry. | Interface | org.apache.hadoop.registry.client.api | Apache Hadoop |
|
BindingInformation | | Class | org.apache.hadoop.registry.client.impl.zk | Apache Hadoop |
|
BlockCompressorStream | A CompressorStream which works with 'block-based' compression algorithms, as opposed to 'stream-based' ones. | Class | org.apache.hadoop.io.compress | Apache Hadoop |
|
BlockDecompressorStream | A DecompressorStream which works with 'block-based' compression algorithms, as opposed to 'stream-based' ones. | Class | org.apache.hadoop.io.compress | Apache Hadoop |
|
BlockLocation | Represents the network location of a block, information about the hosts that contain block replicas, and other block metadata (e.g. offset, length, and corrupt status). | Class | org.apache.hadoop.fs | Apache Hadoop |
|
BlockStorageLocation | Wrapper for BlockLocation that also adds VolumeId volume location information for each replica. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
BloomFilter | The Bloom filter is a data structure introduced in 1970 and adopted by the networking research community in the past decade thanks to the bandwidth efficiencies it offers for transmitting set membership information between networked hosts. | Class | org.apache.hadoop.util.bloom | Apache Hadoop |
|
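The Bloom filter entry above can be sketched with the hadoop-common API; the vector size, hash count, and keys below are illustrative values, not recommendations:

```java
import org.apache.hadoop.util.bloom.BloomFilter;
import org.apache.hadoop.util.bloom.Key;
import org.apache.hadoop.util.hash.Hash;

public class BloomFilterDemo {
    public static void main(String[] args) {
        // 1024-bit vector, 3 hash functions, Murmur hashing
        BloomFilter filter = new BloomFilter(1024, 3, Hash.MURMUR_HASH);
        filter.add(new Key("alpha".getBytes()));

        // Membership tests: no false negatives; false positives possible but
        // extremely unlikely at this load factor.
        boolean present = filter.membershipTest(new Key("alpha".getBytes()));
        boolean absent  = filter.membershipTest(new Key("omega".getBytes()));

        System.out.println(present + " " + absent);
    }
}
```

The same Key/membershipTest API is shared by CountingBloomFilter and DynamicBloomFilter listed below.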
BloomMapFile | This class extends MapFile and provides very much the same functionality. | Class | org.apache.hadoop.io | Apache Hadoop |
|
BooleanSplitter | Implements DBSplitter over boolean values. | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
BooleanWritable | A WritableComparable for booleans. | Class | org.apache.hadoop.io | Apache Hadoop |
|
Buffer | A byte sequence that is used as a Java native type for buffer. | Class | org.apache.hadoop.record | Apache Hadoop |
|
ByteBufferPool | | Interface | org.apache.hadoop.io | Apache Hadoop |
|
BytesWritable | A byte sequence that is usable as a key or value. | Class | org.apache.hadoop.io | Apache Hadoop |
|
ByteWritable | A WritableComparable for a single byte. | Class | org.apache.hadoop.io | Apache Hadoop |
|
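The Writable types listed here (BooleanWritable, BytesWritable, ByteWritable, and the rest) all serialize themselves to a DataOutput and read themselves back from a DataInput in the same order. A minimal round trip, with arbitrary example values, might look like:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class WritableRoundTrip {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);

        // Each Writable knows how to write its own compact representation
        new IntWritable(42).write(out);
        new Text("hello").write(out);

        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        IntWritable num = new IntWritable();
        Text word = new Text();
        num.readFields(in);   // fields come back in write order
        word.readFields(in);

        System.out.println(num.get() + " " + word);
    }
}
```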
BZip2Codec | This class provides output and input streams for bzip2 compression and decompression. | Class | org.apache.hadoop.io.compress | Apache Hadoop |
|
CachedDNSToSwitchMapping | A cached implementation of DNSToSwitchMapping that takes a raw DNSToSwitchMapping and caches the resolved network locations. | Class | org.apache.hadoop.net | Apache Hadoop |
|
CacheFlag | Specifies semantics for CacheDirective operations. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
CancelDelegationTokenRequest | | Interface | org.apache.hadoop.mapreduce.v2.api.protocolrecords | Apache Hadoop |
|
CanSetDropBehind | | Interface | org.apache.hadoop.fs | Apache Hadoop |
|
CanSetReadahead | | Interface | org.apache.hadoop.fs | Apache Hadoop |
|
ChainMapper | The ChainMapper class allows multiple Mapper classes to be used within a single Map task; the Mappers are invoked in a chained (piped) fashion, the output of one becoming the input of the next. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
ChainMapper | The ChainMapper class allows multiple Mapper classes to be used within a single Map task; the Mappers are invoked in a chained (piped) fashion, the output of one becoming the input of the next. | Class | org.apache.hadoop.mapreduce.lib.chain | Apache Hadoop |
|
ChainReducer | The ChainReducer class allows chaining multiple Mapper classes after a Reducer within the Reducer task. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
ChainReducer | The ChainReducer class allows chaining multiple Mapper classes after a Reducer within the Reducer task. | Class | org.apache.hadoop.mapreduce.lib.chain | Apache Hadoop |
|
Checkpointable | Contract representing to the framework that the task can be safely preempted and restarted between invocations of the user-defined function. | Class | org.apache.hadoop.mapreduce.task.annotation | Apache Hadoop |
|
ChecksumException | Thrown for checksum errors. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
ChecksumFileSystem | Abstract Checksumed FileSystem. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
CLI | Interprets the MapReduce command-line options. | Class | org.apache.hadoop.mapreduce.tools | Apache Hadoop |
|
Client | Client for Distributed Shell application submission to YARN. | Class | org.apache.hadoop.yarn.applications.distributedshell | Apache Hadoop |
|
ClientRMProxy | | Class | org.apache.hadoop.yarn.client | Apache Hadoop |
|
ClientRMSecurityInfo | | Class | org.apache.hadoop.yarn.security.client | Apache Hadoop |
|
ClientSCMProtocol | The protocol between clients and the SharedCacheManager to claim and release resources in the shared cache. | Interface | org.apache.hadoop.yarn.api | Apache Hadoop |
|
ClientTimelineSecurityInfo | | Class | org.apache.hadoop.yarn.security.client | Apache Hadoop |
|
ClientToAMTokenIdentifier | | Class | org.apache.hadoop.yarn.security.client | Apache Hadoop |
|
ClientToAMTokenSecretManager | A simple SecretManager for AMs to validate Client-RM tokens issued to clients by the RM, using the underlying master key shared by the RM with the AMs. | Class | org.apache.hadoop.yarn.security.client | Apache Hadoop |
|
Cluster | Provides a way to access information about the map/reduce cluster. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
ClusterMetrics | Status information on the current state of the Map-Reduce cluster. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
ClusterStatus | Status information on the current state of the Map-Reduce cluster. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
CodeBuffer | | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
CodecPool | A global compressor/decompressor pool used to save and reuse (possibly native) compression/decompression codecs. | Class | org.apache.hadoop.io.compress | Apache Hadoop |
|
CombineFileInputFormat | An abstract InputFormat that returns CombineFileSplit's from its getSplits method. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
CombineFileInputFormat | An abstract InputFormat that returns CombineFileSplit's from its getSplits method. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
CombineFileRecordReader | A generic RecordReader that can hand out different recordReaders for each chunk in a CombineFileSplit. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
CombineFileRecordReader | A generic RecordReader that can hand out different recordReaders for each chunk in a CombineFileSplit. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
CombineFileRecordReaderWrapper | A wrapper class for a record reader that handles a single file split. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
CombineFileRecordReaderWrapper | A wrapper class for a record reader that handles a single file split. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
CombineFileSplit | A sub-collection of input files. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
CombineFileSplit | A sub-collection of input files. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
CombineSequenceFileInputFormat | Input format that is a CombineFileInputFormat-equivalent for SequenceFileInputFormat. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
CombineSequenceFileInputFormat | Input format that is a CombineFileInputFormat-equivalent for SequenceFileInputFormat. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
CombineTextInputFormat | Input format that is a CombineFileInputFormat-equivalent for TextInputFormat. See also: CombineFileInputFormat. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
CombineTextInputFormat | Input format that is a CombineFileInputFormat-equivalent for TextInputFormat. See also: CombineFileInputFormat. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
CommonConfigurationKeysPublic | This class contains constants for configuration keys used in the common code. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
ComposableInputFormat | Refinement of InputFormat requiring implementors to provide ComposableRecordReader instead of RecordReader. | Interface | org.apache.hadoop.mapred.join | Apache Hadoop |
|
ComposableInputFormat | Refinement of InputFormat requiring implementors to provide ComposableRecordReader instead of RecordReader. | Class | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
ComposableRecordReader | Additional operations required of a RecordReader to participate in a join. | Interface | org.apache.hadoop.mapred.join | Apache Hadoop |
|
ComposableRecordReader | Additional operations required of a RecordReader to participate in a join. | Class | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
CompositeContext | | Class | org.apache.hadoop.metrics.spi | Apache Hadoop |
|
CompositeInputFormat | An InputFormat capable of performing joins over a set of data sources sorted and partitioned the same way. | Class | org.apache.hadoop.mapred.join | Apache Hadoop |
|
CompositeInputFormat | An InputFormat capable of performing joins over a set of data sources sorted and partitioned the same way. | Class | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
CompositeInputSplit | This InputSplit contains a set of child InputSplits. | Class | org.apache.hadoop.mapred.join | Apache Hadoop |
|
CompositeInputSplit | This InputSplit contains a set of child InputSplits. | Class | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
CompositeRecordReader | A RecordReader that can effect joins of RecordReaders sharing a common key type and partitioning. | Class | org.apache.hadoop.mapred.join | Apache Hadoop |
|
CompositeRecordReader | A RecordReader that can effect joins of RecordReaders sharing a common key type and partitioning. | Class | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
CompositeService | Composition of services. | Class | org.apache.hadoop.service | Apache Hadoop |
|
CompressedWritable | A base-class for Writables which store themselves compressed and lazily inflate on field access. | Class | org.apache.hadoop.io | Apache Hadoop |
|
CompressionCodec | This class encapsulates a streaming compression/decompression pair. | Interface | org.apache.hadoop.io.compress | Apache Hadoop |
|
CompressionCodecFactory | A factory that will find the correct codec for a given filename. | Class | org.apache.hadoop.io.compress | Apache Hadoop |
|
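CompressionCodecFactory's filename-based lookup, described above, can be sketched as follows; the paths are hypothetical examples:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class CodecLookup {
    public static void main(String[] args) {
        // The factory maps registered file extensions (.gz, .bz2, .deflate, ...)
        // to their codec implementations.
        CompressionCodecFactory factory =
            new CompressionCodecFactory(new Configuration());

        CompressionCodec gzip = factory.getCodec(new Path("logs/part-00000.gz"));
        CompressionCodec none = factory.getCodec(new Path("logs/part-00000.txt"));

        // Unknown extensions yield null, meaning "read the file uncompressed"
        System.out.println(gzip == null ? "none" : gzip.getClass().getSimpleName());
        System.out.println(none == null ? "none" : "unexpected");
    }
}
```

A returned codec's createInputStream/createOutputStream methods then provide the CompressionInputStream/CompressionOutputStream pair listed in the adjacent entries.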
CompressionInputStream | A compression input stream. | Class | org.apache.hadoop.io.compress | Apache Hadoop |
|
CompressionOutputStream | A compression output stream. | Class | org.apache.hadoop.io.compress | Apache Hadoop |
|
Compressor | Specification of a stream-based 'compressor' which can be plugged into a CompressionOutputStream to compress data. | Interface | org.apache.hadoop.io.compress | Apache Hadoop |
|
CompressorStream | | Class | org.apache.hadoop.io.compress | Apache Hadoop |
|
Configurable | Something that may be configured with a Configuration. | Interface | org.apache.hadoop.conf | Apache Hadoop |
|
Configuration | Provides access to configuration parameters. | Class | org.apache.hadoop.conf | Apache Hadoop |
|
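A minimal sketch of the Configuration get/set API; the demo.* keys below are hypothetical, not real Hadoop configuration keys:

```java
import org.apache.hadoop.conf.Configuration;

public class ConfigDemo {
    public static void main(String[] args) {
        // false: skip loading core-default.xml / core-site.xml from the classpath
        Configuration conf = new Configuration(false);

        conf.set("demo.name", "wordcount");   // illustrative keys only
        conf.setInt("demo.retries", 3);

        String name = conf.get("demo.name");
        int retries = conf.getInt("demo.retries", 1);  // explicit value wins
        int missing = conf.getInt("demo.unset", 10);   // falls back to the default

        System.out.println(name + " " + retries + " " + missing);
    }
}
```

The same typed getters (getInt, getBoolean, getClass, ...) and resource-loading behavior underpin most of the classes in this index, since nearly all of them are Configurable.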
Configured | Base class for things that may be configured with a Configuration. | Class | org.apache.hadoop.conf | Apache Hadoop |
|
ConnectTimeoutException | Thrown by NetUtils.connect when a socket connection times out. | Class | org.apache.hadoop.net | Apache Hadoop |
|
Consts | | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
Container | Container represents an allocated resource in the cluster. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ContainerExitStatus | Container exit statuses indicating special exit circumstances. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ContainerId | ContainerId represents a globally unique identifier for a Container in the cluster. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ContainerLaunchContext | ContainerLaunchContext represents all of the information needed by the NodeManager to launch a container. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ContainerLogAppender | A simple log4j appender for a container's logs. | Class | org.apache.hadoop.yarn | Apache Hadoop |
|
ContainerManagementProtocol | The protocol between an ApplicationMaster and a NodeManager to start/stop containers and to get status | Interface | org.apache.hadoop.yarn.api | Apache Hadoop |
|
ContainerManagerSecurityInfo | | Class | org.apache.hadoop.yarn.security | Apache Hadoop |
|
ContainerNotFoundException | This exception is thrown on (GetContainerReportRequest) | Class | org.apache.hadoop.yarn.exceptions | Apache Hadoop |
|
ContainerReport | ContainerReport is a report of a container. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ContainerResourceIncreaseRequest | | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ContainerRollingLogAppender | A simple log4j appender for a container's logs, with rolling. | Class | org.apache.hadoop.yarn | Apache Hadoop |
|
ContainerState | State of a Container. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ContainerStatus | ContainerStatus represents the current status of a container, providing details such as its id, state, exit status, and diagnostics. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ContainerTokenIdentifier | TokenIdentifier for a container. | Class | org.apache.hadoop.yarn.security | Apache Hadoop |
|
ContainerTokenSelector | | Class | org.apache.hadoop.yarn.security | Apache Hadoop |
|
ContentSummary | Store the summary of a content (a directory or a file). | Class | org.apache.hadoop.fs | Apache Hadoop |
|
ControlledJob | This class encapsulates a MapReduce job and its dependency. | Class | org.apache.hadoop.mapreduce.lib.jobcontrol | Apache Hadoop |
|
Counter | A named counter that tracks the progress of a map/reduce job. | Interface | org.apache.hadoop.mapreduce | Apache Hadoop |
|
CounterGroup | A group of Counters that logically belong together. | Interface | org.apache.hadoop.mapreduce | Apache Hadoop |
|
CounterGroupBase | The common counter group interface. | Interface | org.apache.hadoop.mapreduce.counters | Apache Hadoop |
|
Counters | A set of named counters. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
Counters | Counters holds per job/task counters, defined either by the Map-Reduce framework or applications. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
CountingBloomFilter | A counting Bloom filter is an improvement over a standard Bloom filter, as it allows dynamic additions and deletions of set membership information. | Class | org.apache.hadoop.util.bloom | Apache Hadoop |
|
CreateFlag | CreateFlag specifies the semantics of file creation. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
CredentialProvider | A provider of credentials or password for Hadoop applications. | Class | org.apache.hadoop.security.alias | Apache Hadoop |
|
CredentialProviderFactory | A factory to create a list of CredentialProvider based on the path given in a Configuration. | Class | org.apache.hadoop.security.alias | Apache Hadoop |
|
CsvRecordInput | | Class | org.apache.hadoop.record | Apache Hadoop |
|
CsvRecordOutput | | Class | org.apache.hadoop.record | Apache Hadoop |
|
DataDrivenDBInputFormat | A InputFormat that reads input data from an SQL table. | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
DataDrivenDBRecordReader | A RecordReader that reads records from a SQL table, using data-driven WHERE clause splits. | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
DataOutputOutputStream | OutputStream implementation that wraps a DataOutput. | Class | org.apache.hadoop.io | Apache Hadoop |
|
DateSplitter | Implements DBSplitter over date/time values. | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
DBConfiguration | | Class | org.apache.hadoop.mapred.lib.db | Apache Hadoop |
|
DBConfiguration | A container for configuration property names for jobs with DB input/output. | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
DBInputFormat | | Class | org.apache.hadoop.mapred.lib.db | Apache Hadoop |
|
DBInputFormat | A InputFormat that reads input data from an SQL table. | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
DBOutputFormat | | Class | org.apache.hadoop.mapred.lib.db | Apache Hadoop |
|
DBOutputFormat | A OutputFormat that sends the reduce output to a SQL table. | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
DBRecordReader | A RecordReader that reads records from a SQL table. | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
DBSplitter | DBSplitter will generate DBInputSplits to use with DataDrivenDBInputFormat. | Interface | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
DBWritable | Objects that are read from/written to a database should implement DBWritable. | Interface | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
Decompressor | Specification of a stream-based 'de-compressor' which can be plugged into a CompressionInputStream to decompress data. | Interface | org.apache.hadoop.io.compress | Apache Hadoop |
|
DecompressorStream | | Class | org.apache.hadoop.io.compress | Apache Hadoop |
|
DefaultCodec | The default compression codec, based on the zlib/deflate algorithm. | Class | org.apache.hadoop.io.compress | Apache Hadoop |
|
DefaultMetricsSystem | The default metrics system singleton. | Class | org.apache.hadoop.metrics2.lib | Apache Hadoop |
|
DefaultStringifier | DefaultStringifier is the default implementation of the Stringifier interface, which stringifies objects using base64 encoding of their serialized form. | Class | org.apache.hadoop.io | Apache Hadoop |
|
DelegationTokenAuthenticatedURL | The DelegationTokenAuthenticatedURL is an AuthenticatedURL sub-class with built-in Hadoop Delegation Token functionality. | Class | org.apache.hadoop.security.token.delegation.web | Apache Hadoop |
|
DelegationTokenAuthenticator | Authenticator wrapper that enhances an Authenticator with Delegation Token support. | Class | org.apache.hadoop.security.token.delegation.web | Apache Hadoop |
|
DirectDecompressionCodec | This class encapsulates a codec which can decompress direct bytebuffers. | Interface | org.apache.hadoop.io.compress | Apache Hadoop |
|
DirectDecompressor | Specification of a direct ByteBuffer 'de-compressor'. | Interface | org.apache.hadoop.io.compress | Apache Hadoop |
|
Dispatcher | Event Dispatcher interface. | Interface | org.apache.hadoop.yarn.event | Apache Hadoop |
|
DistributedCache | Distribute application-specific large, read-only files efficiently. | Class | org.apache.hadoop.filecache | Apache Hadoop |
|
DNSToSwitchMapping | An interface that must be implemented to allow pluggable DNS-name/IP-address to RackID resolvers. | Interface | org.apache.hadoop.net | Apache Hadoop |
|
DoubleValueSum | | Class | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
DoubleValueSum | | Class | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
DoubleWritable | Writable for Double values. | Class | org.apache.hadoop.io | Apache Hadoop |
|
DSConstants | | Class | org.apache.hadoop.yarn.applications.distributedshell | Apache Hadoop |
|
DynamicBloomFilter | A dynamic Bloom filter (DBF) makes use of an s * m bit matrix, where each of the s rows is a standard Bloom filter. | Class | org.apache.hadoop.util.bloom | Apache Hadoop |
|
ElasticByteBufferPool | This is a simple ByteBufferPool which just creates ByteBuffers as needed. | Class | org.apache.hadoop.io | Apache Hadoop |
|
Endpoint | Description of a single service/component endpoint. | Class | org.apache.hadoop.registry.client.types | Apache Hadoop |
|
EnumSetWritable | A Writable wrapper for EnumSet. | Class | org.apache.hadoop.io | Apache Hadoop |
|
Event | Interface defining events api. | Interface | org.apache.hadoop.yarn.event | Apache Hadoop |
|
EventCounter | A log4J Appender that simply counts logging events in three levels: fatal, error and warn. | Class | org.apache.hadoop.log.metrics | Apache Hadoop |
|
FailoverFailedException | Exception thrown to indicate service failover has failed. | Class | org.apache.hadoop.ha | Apache Hadoop |
|
FenceMethod | A fencing method is a method by which one node can forcibly prevent another node from making continued progress. | Interface | org.apache.hadoop.ha | Apache Hadoop |
|
FieldSelectionHelper | This class implements a mapper/reducer class that can be used to perform field selections in a manner similar to unix cut. | Class | org.apache.hadoop.mapreduce.lib.fieldsel | Apache Hadoop |
|
FieldSelectionMapper | This class implements a mapper class that can be used to perform field selections in a manner similar to unix cut. | Class | org.apache.hadoop.mapreduce.lib.fieldsel | Apache Hadoop |
|
FieldSelectionMapReduce | This class implements a mapper/reducer class that can be used to perform field selections in a manner similar to unix cut. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
FieldSelectionReducer | This class implements a reducer class that can be used to perform field selections in a manner similar to unix cut. | Class | org.apache.hadoop.mapreduce.lib.fieldsel | Apache Hadoop |
|
FieldTypeInfo | Represents a type information for a field, which is made up of its ID (name) and its type (a TypeID object). | Class | org.apache.hadoop.record.meta | Apache Hadoop |
|
FileAlreadyExistsException | Used when target file already exists for any operation and is not configured to be overwritten. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
FileAlreadyExistsException | Used when target file already exists for any operation and is not configured to be overwritten. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
FileChecksum | An abstract class representing file checksums for files. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
FileContext | The FileContext class provides an interface to the application writer for using the Hadoop file system. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
FileInputFormat | A base class for file-based InputFormat. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
FileInputFormat | A base class for file-based InputFormats. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
FileInputFormatCounter | Enumeration of FileInputFormat counters (e.g. BYTES_READ). | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
FileOutputCommitter | An OutputCommitter that commits files specified in the job output directory. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
FileOutputCommitter | An OutputCommitter that commits files specified in the job output directory. | Class | org.apache.hadoop.mapreduce.lib.output | Apache Hadoop |
|
FileOutputFormat | A base class for file-based OutputFormats. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
FileOutputFormat | A base class for OutputFormats that write to FileSystems. | Class | org.apache.hadoop.mapreduce.lib.output | Apache Hadoop |
|
FileOutputFormatCounter | Enumeration of counters used by FileOutputFormat (e.g. BYTES_WRITTEN). | Class | org.apache.hadoop.mapreduce.lib.output | Apache Hadoop |
|
FileSink | A metrics sink that writes metrics to a file. | Class | org.apache.hadoop.metrics2.sink | Apache Hadoop |
|
FileSplit | A section of an input file. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
FileSplit | A section of an input file. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
FileStatus | Interface that represents the client side information for a file. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
FileSystem | An abstract base class for a fairly generic filesystem. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
FileUtil | A collection of file-processing utility methods. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
FilterFileSystem | A FilterFileSystem contains some other file system, which it uses as its basic file system, possibly transforming the data along the way or providing additional functionality. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
FilterOutputFormat | FilterOutputFormat is a convenience class that wraps OutputFormat. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
FilterOutputFormat | FilterOutputFormat is a convenience class that wraps OutputFormat. | Class | org.apache.hadoop.mapreduce.lib.output | Apache Hadoop |
|
FinalApplicationStatus | Enumeration of various final states of an Application. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
FinishApplicationMasterRequest | The finalization request sent by the ApplicationMaster to inform the ResourceManager about its completion. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
FinishApplicationMasterResponse | The response sent by the ResourceManager to an ApplicationMaster on its completion. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
FixedLengthInputFormat | FixedLengthInputFormat is an input format used to read input files which contain fixed length records. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
FixedLengthInputFormat | FixedLengthInputFormat is an input format used to read input files which contain fixed length records. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
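FixedLengthInputFormat recovers records without any delimiters, purely by a configured record length in bytes. A minimal pure-Java sketch of that splitting rule (illustrative names, not the Hadoop classes):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of fixed-length record reading: the input carries no
// delimiters, so records are cut purely by a configured byte length.
public class FixedLengthRecords {
    public static List<String> readRecords(byte[] data, int recordLength) {
        List<String> records = new ArrayList<>();
        for (int off = 0; off + recordLength <= data.length; off += recordLength) {
            records.add(new String(data, off, recordLength, StandardCharsets.US_ASCII));
        }
        return records;
    }

    public static void main(String[] args) {
        byte[] data = "AAAABBBBCCCC".getBytes(StandardCharsets.US_ASCII);
        System.out.println(readRecords(data, 4)); // [AAAA, BBBB, CCCC]
    }
}
```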
FloatSplitter | | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
FloatWritable | A WritableComparable for floats. | Class | org.apache.hadoop.io | Apache Hadoop |
|
FsAction | File system actions, e.g. read, write, execute. | Class | org.apache.hadoop.fs.permission | Apache Hadoop |
|
FsConstants | FileSystem related constants. | Interface | org.apache.hadoop.fs | Apache Hadoop |
|
FSDataInputStream | Utility that wraps a FSInputStream in a DataInputStream and buffers input through a BufferedInputStream. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
FSDataOutputStream | Utility that wraps an OutputStream in a DataOutputStream. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
FSError | Thrown for unexpected filesystem errors, presumed to reflect disk errors in the native filesystem. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
FsPermission | A class for file/directory permissions. | Class | org.apache.hadoop.fs.permission | Apache Hadoop |
|
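FsPermission carries Unix-style file/directory permissions, conventionally displayed as a symbolic string like "rwxr-x---" and stored as octal mode bits. A small pure-Java sketch of that mapping (a hypothetical helper, not the FsPermission API):

```java
// Simplified sketch of how a symbolic permission string such as "rwxr-x---"
// maps to the octal mode (0750) that FsPermission-style classes carry.
public class PermissionBits {
    public static int parse(String symbolic) {
        if (symbolic.length() != 9) {
            throw new IllegalArgumentException("expected 9 characters: " + symbolic);
        }
        int mode = 0;
        for (int i = 0; i < 9; i++) {
            // Each non-'-' character contributes one set bit, MSB first.
            mode = (mode << 1) | (symbolic.charAt(i) == '-' ? 0 : 1);
        }
        return mode;
    }

    public static void main(String[] args) {
        System.out.printf("%o%n", parse("rwxr-x---")); // 750
    }
}
```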
FsServerDefaults | Provides server default configuration values to clients. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
FsStatus | Represents the capacity, free space, and used space of a FileSystem. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
FTPException | A class to wrap a Throwable into a RuntimeException. | Class | org.apache.hadoop.fs.ftp | Apache Hadoop |
|
FTPFileSystem | A FileSystem backed by an FTP client provided by Apache Commons Net. | Class | org.apache.hadoop.fs.ftp | Apache Hadoop |
|
GangliaContext | Context for sending metrics to Ganglia. | Class | org.apache.hadoop.metrics.ganglia | Apache Hadoop |
|
GenericWritable | A wrapper for Writable instances. | Class | org.apache.hadoop.io | Apache Hadoop |
|
GetApplicationAttemptReportRequest | The request sent by a client to the ResourceManager to get an ApplicationAttemptReport for an application attempt. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetApplicationAttemptReportResponse | The response sent by the ResourceManager to a client requesting an application attempt report. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetApplicationAttemptsRequest | The request from clients to get a list of application attempt reports of an application from the ResourceManager. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetApplicationAttemptsResponse | The response sent by the ResourceManager to a client requesting a list of ApplicationAttemptReport for application attempts. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetApplicationReportRequest | The request sent by a client to the ResourceManager to get an ApplicationReport for an application. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetApplicationReportResponse | The response sent by the ResourceManager to a client requesting an application report. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetApplicationsRequest | The request from clients to get a report of Applications in the cluster from the ResourceManager. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetApplicationsResponse | The response sent by the ResourceManager to a client requesting an ApplicationReport for applications. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetClusterMetricsRequest | The request sent by clients to get cluster metrics from the ResourceManager. Currently, this request is empty. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetClusterMetricsResponse | The response sent by the ResourceManager to a client requesting cluster metrics. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetClusterNodeLabelsRequest | | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetClusterNodeLabelsResponse | | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetClusterNodesRequest | The request from clients to get a report of all nodes in the cluster from the ResourceManager. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetClusterNodesResponse | The response sent by the ResourceManager to a client requesting a NodeReport for all nodes. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetContainerReportRequest | The request sent by a client to the ResourceManager to get an ContainerReport for a container. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetContainerReportResponse | The response sent by the ResourceManager to a client requesting a container report. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetContainersRequest | The request from clients to get a list of container reports, which belong to an application attempt from the ResourceManager. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetContainersResponse | The response sent by the ResourceManager to a client requesting a list of ContainerReport for containers. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetContainerStatusesRequest | The request sent by the ApplicationMaster to the NodeManager to get the ContainerStatuses of requested containers. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetContainerStatusesResponse | The response sent by the NodeManager to the ApplicationMaster when asked to obtain the ContainerStatuses of requested containers. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetDelegationTokenRequest | | Interface | org.apache.hadoop.mapreduce.v2.api.protocolrecords | Apache Hadoop |
|
GetDelegationTokenRequest | The request issued by the client to get a delegation token from the ResourceManager. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetDelegationTokenResponse | Response to a GetDelegationTokenRequest request from the client. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetNewApplicationRequest | The request sent by clients to get a new ApplicationId for submitting an application. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetNewApplicationResponse | The response sent by the ResourceManager to the client for a request to get a new ApplicationId for submitting applications. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetQueueInfoRequest | The request sent by clients to get queue information from the ResourceManager. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetQueueInfoResponse | The response sent by the ResourceManager to a client requesting information about queues in the system. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetQueueUserAclsInfoRequest | The request sent by clients to the ResourceManager to get queue acls for the current user. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GetQueueUserAclsInfoResponse | The response sent by the ResourceManager to clients seeking queue acls for the user. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
GlobFilter | A filter for POSIX glob pattern with brace expansions. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
GlobFilter | A glob pattern filter for metrics. | Class | org.apache.hadoop.metrics2.filter | Apache Hadoop |
|
GraphiteSink | | Class | org.apache.hadoop.metrics2.sink | Apache Hadoop |
|
GroupMappingServiceProvider | | Interface | org.apache.hadoop.security | Apache Hadoop |
|
GzipCodec | This class creates gzip compressors/decompressors. | Class | org.apache.hadoop.io.compress | Apache Hadoop |
|
HadoopIllegalArgumentException | Indicates that a method has been passed an illegal or invalid argument. | Class | org.apache.hadoop | Apache Hadoop |
|
HAServiceProtocol | Protocol interface that provides High Availability related primitives to monitor and fail-over the service. | Interface | org.apache.hadoop.ha | Apache Hadoop |
|
HAServiceProtocolHelper | Helper for making HAServiceProtocol RPC calls. | Class | org.apache.hadoop.ha | Apache Hadoop |
|
HAServiceProtocolPB | | Interface | org.apache.hadoop.ha.protocolPB | Apache Hadoop |
|
HAServiceTarget | Represents a target of the client side HA administration commands. | Class | org.apache.hadoop.ha | Apache Hadoop |
|
HashFunction | Implements a hash object that returns a certain number of hashed values; used by the Bloom filter implementations. | Class | org.apache.hadoop.util.bloom | Apache Hadoop |
|
HashPartitioner | Partition keys by their Object.hashCode(). | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
HashPartitioner | Partition keys by their Object.hashCode(). | Class | org.apache.hadoop.mapreduce.lib.partition | Apache Hadoop |
|
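HashPartitioner's documented partitioning rule is `(key.hashCode() & Integer.MAX_VALUE) % numReduceTasks`; the mask keeps the result non-negative even when hashCode() is negative. A standalone sketch of that computation (illustrative class name, not the Hadoop class):

```java
// Sketch of HashPartitioner's rule: assign a key to a reduce partition
// from its hashCode, masked to stay non-negative.
public class HashPartitionSketch {
    public static int getPartition(Object key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        // The same key always lands in the same partition.
        System.out.println(getPartition("hadoop", 4));
        System.out.println(getPartition("hadoop", 4));
    }
}
```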
HdfsVolumeId | HDFS-specific volume identifier which implements VolumeId. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
HealthCheckFailedException | Exception thrown to indicate that health check of a service failed. | Class | org.apache.hadoop.ha | Apache Hadoop |
|
HistoryFileManager | | Class | org.apache.hadoop.mapreduce.v2.hs | Apache Hadoop |
|
HistoryStorage | Provides an API to query jobs that have finished. | Interface | org.apache.hadoop.mapreduce.v2.hs | Apache Hadoop |
|
ID | A general identifier, which internally stores the id as an integer. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
ID | A general identifier, which internally stores the id as an integer. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
IdentityMapper | Implements the identity function, mapping inputs directly to outputs. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
IdentityReducer | Performs no reduction, writing all input values directly to the output. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
IdMappingServiceProvider | | Interface | org.apache.hadoop.security | Apache Hadoop |
|
Index | Interface that acts as an iterator for deserializing maps. | Interface | org.apache.hadoop.record | Apache Hadoop |
|
InnerJoinRecordReader | | Class | org.apache.hadoop.mapred.join | Apache Hadoop |
|
InnerJoinRecordReader | | Class | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
InputFormat | InputFormat describes the input-specification for a Map-Reduce job; the framework relies on the job's InputFormat to validate the input-specification and split the input into InputSplits. | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
InputFormat | InputFormat describes the input-specification for a Map-Reduce job; the framework relies on the job's InputFormat to validate the input-specification and split the input into InputSplits. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
InputSampler | | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
InputSampler | Utility for collecting samples and writing a partition file for TotalOrderPartitioner. | Class | org.apache.hadoop.mapreduce.lib.partition | Apache Hadoop |
|
InputSplit | InputSplit represents the data to be processed by an individual Mapper. | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
InputSplit | InputSplit represents the data to be processed by an individual Mapper. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
InputSplitWithLocationInfo | | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
IntegerSplitter | | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
InterfaceAudience | Annotation to inform users of a package, class or method's intended audience. | Class | org.apache.hadoop.classification | Apache Hadoop |
|
InterfaceStability | Annotation to inform users of how much to rely on a particular package, class or method not changing over time. | Class | org.apache.hadoop.classification | Apache Hadoop |
|
Interns | | Class | org.apache.hadoop.metrics2.lib | Apache Hadoop |
|
IntSumReducer | | Class | org.apache.hadoop.mapreduce.lib.reduce | Apache Hadoop |
|
IntWritable | A WritableComparable for ints. | Class | org.apache.hadoop.io | Apache Hadoop |
|
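IntWritable and the other Writable types follow a common pattern: a mutable box that serializes itself to a DataOutput and restores itself from a DataInput. A self-contained pure-Java sketch of that pattern (IntBox is a hypothetical stand-in, not the Hadoop class):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of the Writable pattern behind IntWritable: a value object that
// writes itself to a DataOutput and reads itself back from a DataInput.
public class IntBox {
    private int value;

    public void set(int value) { this.value = value; }
    public int get() { return value; }

    public void write(DataOutputStream out) throws IOException { out.writeInt(value); }
    public void readFields(DataInputStream in) throws IOException { value = in.readInt(); }

    public static void main(String[] args) throws IOException {
        IntBox a = new IntBox();
        a.set(42);
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        a.write(new DataOutputStream(buf));

        IntBox b = new IntBox();
        b.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(b.get()); // 42
    }
}
```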
InvalidFileTypeException | Used when file type differs from the desired file type. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
InvalidInputException | This class wraps a list of problems with the input, so that the user can get a list of problems together instead of finding and fixing them one by one. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
InvalidInputException | This class wraps a list of problems with the input, so that the user can get a list of problems together instead of finding and fixing them one by one. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
InvalidJobConfException | This exception is thrown when the job configuration is missing mandatory attributes or when attribute values are invalid. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
InvalidPathException | Path string is invalid either because it has invalid characters or due to other file system specific reasons. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
InvalidPathnameException | A path name was invalid. | Class | org.apache.hadoop.registry.client.exceptions | Apache Hadoop |
|
InvalidRecordException | Raised if an attempt to parse a record failed. | Class | org.apache.hadoop.registry.client.exceptions | Apache Hadoop |
|
InvalidStateTransitonException | | Class | org.apache.hadoop.yarn.state | Apache Hadoop |
|
InverseMapper | A Mapper that swaps keys and values. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
InverseMapper | A Mapper that swaps keys and values. | Class | org.apache.hadoop.mapreduce.lib.map | Apache Hadoop |
|
IOUtils | A utility class for I/O related functionality. | Class | org.apache.hadoop.io | Apache Hadoop |
|
IPList | | Interface | org.apache.hadoop.util | Apache Hadoop |
|
JavaSerialization | An experimental Serialization for Java Serializable classes. | Class | org.apache.hadoop.io.serializer | Apache Hadoop |
|
JavaSerializationComparator | A RawComparator that uses a JavaSerialization Deserializer to deserialize objects that are then compared via their Comparable interfaces. | Class | org.apache.hadoop.io.serializer | Apache Hadoop |
|
JBoolean | | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
JBuffer | Code generator for "buffer" type. | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
JByte | Code generator for "byte" type. | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
JDouble | | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
JField | A thin wrapper around a record field. | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
JFile | Container for the Hadoop Record DDL. | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
JFloat | | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
JInt | | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
JLong | | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
JMap | | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
Job | | Class | org.apache.hadoop.mapred.jobcontrol | Apache Hadoop |
|
Job | The job submitter's view of the Job. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
JobClient | JobClient is the primary interface for the user-job to interact with the cluster. JobClient provides facilities to submit jobs, track their progress, and access task reports and logs. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
JobConf | A map/reduce job configuration. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
JobConfigurable | Something that may be configured with a JobConf. | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
JobContext | | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
JobContext | | Interface | org.apache.hadoop.mapreduce | Apache Hadoop |
|
JobControl | | Class | org.apache.hadoop.mapred.jobcontrol | Apache Hadoop |
|
JobControl | This class encapsulates a set of MapReduce jobs and their dependencies. | Class | org.apache.hadoop.mapreduce.lib.jobcontrol | Apache Hadoop |
|
JobCounter | | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
JobID | JobID represents the immutable and unique identifier for the job. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
JobID | JobID represents the immutable and unique identifier for the job. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
JobPriority | Used to describe the priority of the running job. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
JobPriority | Used to describe the priority of the running job. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
JobQueueInfo | Class that contains the information regarding the Job Queues which are maintained by the Hadoop Map/Reduce framework. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
JobStatus | Describes the current status of a job. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
JobStatus | Describes the current status of a job. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
JoinRecordReader | Base class for Composite joins returning Tuples of arbitrary Writables. | Class | org.apache.hadoop.mapred.join | Apache Hadoop |
|
JoinRecordReader | Base class for Composite joins returning Tuples of arbitrary Writables. | Class | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
JRecord | | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
JString | | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
JType | Abstract Base class for all types supported by Hadoop Record I/O. | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
JVector | | Class | org.apache.hadoop.record.compiler | Apache Hadoop |
|
KerberosDelegationTokenAuthenticator | The KerberosDelegationTokenAuthenticator provides support for the Kerberos SPNEGO authentication mechanism and for Hadoop Delegation Token operations. | Class | org.apache.hadoop.security.token.delegation.web | Apache Hadoop |
|
KeyFieldBasedComparator | This comparator implementation provides a subset of the features provided by the Unix/GNU Sort. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
KeyFieldBasedComparator | This comparator implementation provides a subset of the features provided by the Unix/GNU Sort. | Class | org.apache.hadoop.mapreduce.lib.partition | Apache Hadoop |
|
KeyFieldBasedPartitioner | Defines a way to partition keys based on certain key fields (also see KeyFieldBasedComparator). | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
KeyFieldBasedPartitioner | Defines a way to partition keys based on certain key fields (also see KeyFieldBasedComparator). | Class | org.apache.hadoop.mapreduce.lib.partition | Apache Hadoop |
|
KeyProvider | A provider of secret key material for Hadoop applications. | Class | org.apache.hadoop.crypto.key | Apache Hadoop |
|
KeyProviderFactory | A factory to create a list of KeyProvider based on the path given in a Configuration. | Class | org.apache.hadoop.crypto.key | Apache Hadoop |
|
KeyValueLineRecordReader | This class treats a line in the input as a key/value pair separated by a separator character. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
KeyValueLineRecordReader | This class treats a line in the input as a key/value pair separated by a separator character. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
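KeyValueLineRecordReader's splitting rule is: the key is everything before the first separator character (tab by default), the value is everything after; if the line has no separator, the whole line becomes the key and the value is empty. A pure-Java sketch of that rule (illustrative helper, not the Hadoop class):

```java
// Sketch of the key/value splitting rule used by KeyValueLineRecordReader:
// split each line at the FIRST occurrence of the separator character.
public class KeyValueSplit {
    public static String[] split(String line, char separator) {
        int pos = line.indexOf(separator);
        if (pos < 0) {
            // No separator: the whole line is the key, the value is empty.
            return new String[] { line, "" };
        }
        return new String[] { line.substring(0, pos), line.substring(pos + 1) };
    }

    public static void main(String[] args) {
        String[] kv = split("apple\t3\tred", '\t');
        // key is "apple"; the value keeps any later separators: "3\tred"
        System.out.println(kv[0] + " -> " + kv[1]);
    }
}
```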
KeyValueTextInputFormat | An InputFormat for plain text files. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
KeyValueTextInputFormat | An InputFormat for plain text files. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
KillApplicationRequest | The request sent by the client to the ResourceManager to abort a submitted application. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
KillApplicationResponse | The response sent by the ResourceManager to the client aborting a submitted application. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
LazyOutputFormat | A convenience class that creates output lazily. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
LazyOutputFormat | A convenience class that creates output lazily. | Class | org.apache.hadoop.mapreduce.lib.output | Apache Hadoop |
|
LifecycleEvent | A serializable lifecycle event: the time a state transition occurred, and what state was entered. | Class | org.apache.hadoop.service | Apache Hadoop |
|
LocalFileSystem | | Class | org.apache.hadoop.fs | Apache Hadoop |
|
LocalResource | LocalResource represents a local resource required to run a container. The NodeManager is responsible for localizing the resource prior to launching the container. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
LocalResourceType | LocalResourceType specifies the type of a resource localized by the NodeManager. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
LocalResourceVisibility | LocalResourceVisibility specifies the visibility of a resource localized by the NodeManager. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
LocatedFileStatus | This class defines a FileStatus that includes a file's block locations. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
LogAggregationContext | LogAggregationContext represents all of the information needed by the NodeManager to handle the logs for an application. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
LoggingStateChangeListener | | Class | org.apache.hadoop.service | Apache Hadoop |
|
LogsCLI | | Class | org.apache.hadoop.yarn.client.cli | Apache Hadoop |
|
LongSumReducer | A Reducer that sums long values. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
LongSumReducer | | Class | org.apache.hadoop.mapreduce.lib.reduce | Apache Hadoop |
|
LongValueMax | This class implements a value aggregator that maintains the maximum of a sequence of long values. | Class | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
LongValueMax | This class implements a value aggregator that maintains the maximum of a sequence of long values. | Class | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
LongValueMin | This class implements a value aggregator that maintains the minimum of a sequence of long values. | Class | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
LongValueMin | This class implements a value aggregator that maintains the minimum of a sequence of long values. | Class | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
LongValueSum | This class implements a value aggregator that sums up a sequence of long values. | Class | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
LongValueSum | This class implements a value aggregator that sums up a sequence of long values. | Class | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
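The LongValueSum/LongValueMax/LongValueMin aggregators each fold a sequence of long values into a single report. A pure-Java sketch of the three folds (illustrative class, not the Hadoop aggregate library):

```java
// Sketch of the idea behind LongValueSum/Max/Min: fold a stream of
// long values into a single aggregate.
public class LongAggregators {
    public static long sum(long[] values) {
        long acc = 0;
        for (long v : values) acc += v;
        return acc;
    }

    public static long max(long[] values) {
        long acc = Long.MIN_VALUE;
        for (long v : values) acc = Math.max(acc, v);
        return acc;
    }

    public static long min(long[] values) {
        long acc = Long.MAX_VALUE;
        for (long v : values) acc = Math.min(acc, v);
        return acc;
    }

    public static void main(String[] args) {
        long[] values = { 7, -2, 11, 4 };
        System.out.println(sum(values) + " " + max(values) + " " + min(values)); // 20 11 -2
    }
}
```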
LongWritable | A WritableComparable for longs. | Class | org.apache.hadoop.io | Apache Hadoop |
|
MapContext | The context that is given to the Mapper. | Interface | org.apache.hadoop.mapreduce | Apache Hadoop |
|
MapFile | A file-based map from keys to values. | Class | org.apache.hadoop.io | Apache Hadoop |
|
MapFileOutputFormat | An OutputFormat that writes MapFiles. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
MapFileOutputFormat | | Class | org.apache.hadoop.mapreduce.lib.output | Apache Hadoop |
|
Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
Mapper | Maps input key/value pairs to a set of intermediate key/value pairs. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
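The Mapper contract above is: each input record is transformed into zero or more intermediate key/value pairs. A pure-Java sketch of a word-count-style map step (hypothetical names; the real API works through a Context/OutputCollector rather than a returned list):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the map step's contract: one input record in, zero or more
// intermediate (key, value) pairs out — here in word-count style.
public class WordCountMapSketch {
    public static List<String[]> map(String line) {
        List<String[]> pairs = new ArrayList<>();
        for (String word : line.trim().split("\\s+")) {
            if (!word.isEmpty()) {
                pairs.add(new String[] { word, "1" });
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        for (String[] kv : map("to be or not to be")) {
            System.out.println(kv[0] + "\t" + kv[1]);
        }
    }
}
```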
MapReduceBase | Base class for Mapper and Reducer implementations. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
MapRunnable | Expert: Generic interface for Mappers. | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
MapRunner | Default MapRunnable implementation. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
MapTypeID | | Class | org.apache.hadoop.record.meta | Apache Hadoop |
|
MapWritable | | Class | org.apache.hadoop.io | Apache Hadoop |
|
MarkableIterator | MarkableIterator is a wrapper iterator class that implements the MarkableIteratorInterface. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
MBeans | This utility class provides a method to register an MBean using the standard Hadoop naming convention. | Class | org.apache.hadoop.metrics2.util | Apache Hadoop |
|
MD5Hash | A Writable for MD5 hash values. | Class | org.apache.hadoop.io | Apache Hadoop |
|
MetaBlockAlreadyExists | Exception - Meta Block with the same name already exists. | Class | org.apache.hadoop.io.file.tfile | Apache Hadoop |
|
MetaBlockDoesNotExist | Exception - No such Meta Block with the given name. | Class | org.apache.hadoop.io.file.tfile | Apache Hadoop |
|
Metric | Annotation interface for a single metric. | Class | org.apache.hadoop.metrics2.annotation | Apache Hadoop |
|
Metrics | Annotation interface for a group of metrics. | Class | org.apache.hadoop.metrics2.annotation | Apache Hadoop |
|
MetricsCache | A metrics cache for sinks that don't support sparse updates. | Class | org.apache.hadoop.metrics2.util | Apache Hadoop |
|
MetricsCollector | | Interface | org.apache.hadoop.metrics2 | Apache Hadoop |
|
MetricsException | A general metrics exception wrapper. | Class | org.apache.hadoop.metrics2 | Apache Hadoop |
|
MetricsFilter | | Class | org.apache.hadoop.metrics2 | Apache Hadoop |
|
MetricsInfo | | Interface | org.apache.hadoop.metrics2 | Apache Hadoop |
|
MetricsPlugin | | Interface | org.apache.hadoop.metrics2 | Apache Hadoop |
|
MetricsRecord | | Interface | org.apache.hadoop.metrics2 | Apache Hadoop |
|
MetricsRecordBuilder | | Class | org.apache.hadoop.metrics2 | Apache Hadoop |
|
MetricsRecordImpl | An implementation of MetricsRecord. | Class | org.apache.hadoop.metrics.spi | Apache Hadoop |
|
MetricsRegistry | An optional metrics registry class for creating and maintaining a collection of MetricsMutables, making writing metrics source easier. | Class | org.apache.hadoop.metrics2.lib | Apache Hadoop |
|
MetricsSink | The metrics sink interface. | Interface | org.apache.hadoop.metrics2 | Apache Hadoop |
|
MetricsSource | | Interface | org.apache.hadoop.metrics2 | Apache Hadoop |
|
MetricsSystem | | Class | org.apache.hadoop.metrics2 | Apache Hadoop |
|
MetricsSystemMXBean | | Interface | org.apache.hadoop.metrics2 | Apache Hadoop |
|
MetricsTag | Immutable tag for metrics (for grouping on host/queue/username, etc.). | Class | org.apache.hadoop.metrics2 | Apache Hadoop |
|
MetricsVisitor | | Interface | org.apache.hadoop.metrics2 | Apache Hadoop |
|
MetricValue | A Number that is either an absolute or an incremental amount. | Class | org.apache.hadoop.metrics.spi | Apache Hadoop |
|
MigrationTool | This class is a tool for migrating data from an older to a newer version of an S3 filesystem. | Class | org.apache.hadoop.fs.s3 | Apache Hadoop |
|
MoveApplicationAcrossQueuesRequest | The request sent by the client to the ResourceManager to move a submitted application to a different queue. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
MoveApplicationAcrossQueuesResponse | The response sent by the ResourceManager to the client moving a submitted application to a different queue. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
MultiFileInputFormat | An abstract InputFormat that returns MultiFileSplits in its getSplits(JobConf, int) method. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
MultiFileSplit | A sub-collection of input files. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
MultiFilterRecordReader | Base class for Composite join returning values derived from multiple sources, but generally not tuples. | Class | org.apache.hadoop.mapred.join | Apache Hadoop |
|
MultiFilterRecordReader | Base class for Composite join returning values derived from multiple sources, but generally not tuples. | Class | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
MultipleArcTransition | Hook for Transition. | Interface | org.apache.hadoop.yarn.state | Apache Hadoop |
|
MultipleInputs | This class supports MapReduce jobs that have multiple input paths with a different InputFormat and Mapper for each path | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
MultipleInputs | This class supports MapReduce jobs that have multiple input paths with a different InputFormat and Mapper for each path | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
MultipleIOException | Encapsulates a list of IOExceptions into a single IOException. | Class | org.apache.hadoop.io | Apache Hadoop |
|
MultipleOutputFormat | This abstract class extends FileOutputFormat, allowing the output data to be written to different output files. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
MultipleOutputs | The MultipleOutputs class simplifies writing to additional outputs other than the job default output, via the OutputCollector passed to the map() and reduce() methods. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
MultipleOutputs | The MultipleOutputs class simplifies writing output data to multiple outputs | Class | org.apache.hadoop.mapreduce.lib.output | Apache Hadoop |
|
MultipleSequenceFileOutputFormat | This class extends MultipleOutputFormat, allowing the output data to be written to different output files in SequenceFile output format. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
MultipleTextOutputFormat | This class extends MultipleOutputFormat, allowing the output data to be written to different output files in Text output format. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
MultithreadedMapper | Multithreaded implementation of org.apache.hadoop.mapreduce.Mapper. | Class | org.apache.hadoop.mapreduce.lib.map | Apache Hadoop |
|
MultithreadedMapRunner | Multithreaded implementation for MapRunnable. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
MutableCounter | | Class | org.apache.hadoop.metrics2.lib | Apache Hadoop |
|
MutableCounterInt | | Class | org.apache.hadoop.metrics2.lib | Apache Hadoop |
|
MutableCounterLong | | Class | org.apache.hadoop.metrics2.lib | Apache Hadoop |
|
MutableGauge | | Class | org.apache.hadoop.metrics2.lib | Apache Hadoop |
|
MutableGaugeInt | | Class | org.apache.hadoop.metrics2.lib | Apache Hadoop |
|
MutableGaugeLong | | Class | org.apache.hadoop.metrics2.lib | Apache Hadoop |
|
MutableMetric | | Class | org.apache.hadoop.metrics2.lib | Apache Hadoop |
|
MutableQuantiles | Watches a stream of long values, maintaining online estimates of specific quantiles with provably low error bounds. | Class | org.apache.hadoop.metrics2.lib | Apache Hadoop |
|
MutableRate | | Class | org.apache.hadoop.metrics2.lib | Apache Hadoop |
|
MutableRates | | Class | org.apache.hadoop.metrics2.lib | Apache Hadoop |
|
MutableStat | A mutable metric with stats. | Class | org.apache.hadoop.metrics2.lib | Apache Hadoop |
|
MySQLDataDrivenDBRecordReader | | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
MySQLDBRecordReader | A RecordReader that reads records from a MySQL table. | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
NativeAzureFileSystem | A FileSystem for reading and writing files stored on Windows Azure. | Class | org.apache.hadoop.fs.azure | Apache Hadoop |
|
NativeS3FileSystem | A FileSystem for reading and writing files stored on Amazon S3. Unlike S3FileSystem, this implementation stores files in their native form, so they can be read by other tools. | Class | org.apache.hadoop.fs.s3native | Apache Hadoop |
|
NLineInputFormat | NLineInputFormat which splits N lines of input as one split. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
NLineInputFormat | NLineInputFormat which splits N lines of input as one split. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
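NLineInputFormat (in both packages above) gives each mapper at most N lines of input. As a rough JDK-only model of that splitting rule — the class and method names here are illustrative, not part of the Hadoop API:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java model of NLineInputFormat's splitting rule: each split
// holds at most n consecutive input lines, so each mapper sees n lines.
public class NLineSplitModel {
    public static List<List<String>> split(List<String> lines, int n) {
        List<List<String>> splits = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += n) {
            // Copy the sublist so each split is independent of the source list.
            splits.add(new ArrayList<>(lines.subList(i, Math.min(i + n, lines.size()))));
        }
        return splits;
    }
}
```

With 7 input lines and N = 3 this yields splits of 3, 3, and 1 lines; the last split is simply shorter.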
NMClient | | Class | org.apache.hadoop.yarn.client.api | Apache Hadoop |
|
NMClientAsync | NMClientAsync handles communication with all the NodeManagers and provides asynchronous updates on getting responses from them. | Class | org.apache.hadoop.yarn.client.api.async | Apache Hadoop |
|
NMProxy | | Class | org.apache.hadoop.yarn.client | Apache Hadoop |
|
NMToken | The NMToken is used for authenticating communication with the NodeManager. It is issued by the ResourceManager when the ApplicationMaster negotiates resources. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
NMTokenCache | NMTokenCache manages NMTokens required for an Application Master communicating with individual NodeManagers. | Class | org.apache.hadoop.yarn.client.api | Apache Hadoop |
|
NMTokenIdentifier | | Class | org.apache.hadoop.yarn.security | Apache Hadoop |
|
NoChildrenForEphemeralsException | A manifestation of the ZooKeeper restriction on which nodes may act as parents: ephemeral nodes may not have children. | Class | org.apache.hadoop.registry.client.exceptions | Apache Hadoop |
|
NodeId | NodeId is the unique identifier for a node. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
NodeReport | NodeReport is a summary of runtime information of a node in the cluster. It includes details such as the node ID, rack name, node state, and resource usage. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
NodeState | | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
NoEmitMetricsContext | A MetricsContext that does not emit data, but, unlike NullContextWithUpdate, does save it for retrieval with getAllRecords(). | Class | org.apache.hadoop.metrics.spi | Apache Hadoop |
|
NoRecordException | Raised if there is no ServiceRecord resolved at the end of the specified path. | Class | org.apache.hadoop.registry.client.exceptions | Apache Hadoop |
|
NotInMountpointException | NotInMountpointException extends the UnsupportedOperationException. | Class | org.apache.hadoop.fs.viewfs | Apache Hadoop |
|
NullContext | Null metrics context: a metrics context which does nothing. | Class | org.apache.hadoop.metrics.spi | Apache Hadoop |
|
NullContextWithUpdateThread | A null context which has a thread calling periodically when monitoring is started. | Class | org.apache.hadoop.metrics.spi | Apache Hadoop |
|
NullOutputFormat | Consume all outputs and put them in /dev/null. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
NullOutputFormat | Consume all outputs and put them in /dev/null. | Class | org.apache.hadoop.mapreduce.lib.output | Apache Hadoop |
|
NullWritable | Singleton Writable with no data. | Class | org.apache.hadoop.io | Apache Hadoop |
|
ObjectWritable | A polymorphic Writable that writes an instance with its class name. | Class | org.apache.hadoop.io | Apache Hadoop |
|
Options | This class contains options related to file system operations. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
OracleDataDrivenDBInputFormat | A InputFormat that reads input data from an SQL table in an Oracle db. | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
OracleDataDrivenDBRecordReader | | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
OracleDateSplitter | Makes use of logic from DateSplitter; this just needs to use some Oracle-specific functions on the formatting end when generating InputSplits. | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
OracleDBRecordReader | A RecordReader that reads records from an Oracle SQL table. | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
OuterJoinRecordReader | | Class | org.apache.hadoop.mapred.join | Apache Hadoop |
|
OuterJoinRecordReader | | Class | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
OutputCollector | Collects the key/value pairs output by Mappers and Reducers. OutputCollector is the generalization of the facility the Map-Reduce framework provides for collecting output data. | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
OutputCommitter | OutputCommitter describes the commit of task output for a Map-Reduce job. The framework relies on the OutputCommitter of the job to set up, commit, and clean up task output. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
OutputCommitter | OutputCommitter describes the commit of task output for a Map-Reduce job. The framework relies on the OutputCommitter of the job to set up, commit, and clean up task output. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. The framework relies on the OutputFormat of the job to validate the output specification and provide the RecordWriter. | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
OutputFormat | OutputFormat describes the output-specification for a Map-Reduce job. The framework relies on the OutputFormat of the job to validate the output specification and provide the RecordWriter. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
OutputLogFilter | This class filters log files from the given directory. It does not accept paths containing _logs. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
OutputRecord | Represents a record of metric data to be sent to a metrics system. | Class | org.apache.hadoop.metrics.spi | Apache Hadoop |
|
OverrideRecordReader | Prefer the "rightmost" data source for this key. | Class | org.apache.hadoop.mapred.join | Apache Hadoop |
|
OverrideRecordReader | Prefer the "rightmost" data source for this key. | Class | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
ParentNotDirectoryException | Indicates that the parent of the specified Path is not a directory. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
ParseException | This exception is thrown when parse errors are encountered. | Class | org.apache.hadoop.record.compiler.generated | Apache Hadoop |
|
Parser | Very simple shift-reduce parser for join expressions. | Class | org.apache.hadoop.mapred.join | Apache Hadoop |
|
Parser | Very simple shift-reduce parser for join expressions. | Class | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
PartialFileOutputCommitter | An OutputCommitter that commits files specified in the job output directory, i.e. ${mapreduce.output.fileoutputformat.outputdir}. | Class | org.apache.hadoop.mapreduce.lib.output | Apache Hadoop |
|
PartialOutputCommitter | Interface for an OutputCommitter implementing partial commit of task output, as during preemption. | Interface | org.apache.hadoop.mapreduce.lib.output | Apache Hadoop |
|
Partitioner | Partitions the key space. | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
Partitioner | Partitions the key space. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
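The Partitioner entries above divide the key space among reducers. The default hash scheme (as used by Hadoop's HashPartitioner) masks the sign bit of the key's hash and takes the modulus; a minimal JDK-only sketch, with an illustrative class name:

```java
// Sketch of hash partitioning: mask the sign bit so the result of the
// modulus is non-negative, landing every key in [0, numPartitions).
public class HashPartitionSketch {
    public static int getPartition(Object key, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
```

Any given key therefore maps deterministically to one partition, which is what guarantees all records sharing a key reach the same reducer.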
Path | Names a file or directory in a FileSystem. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
PathFilter | | Interface | org.apache.hadoop.fs | Apache Hadoop |
|
PositionedReadable | Stream that permits positional reading. | Interface | org.apache.hadoop.fs | Apache Hadoop |
|
PreemptionContainer | Specific container requested back by the ResourceManager. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
PreemptionContract | Description of resources requested back by the ResourceManager. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
PreemptionMessage | A PreemptionMessage is part of the RM-AM protocol, and it is used by the RM to specify resources that the RM wants to reclaim from this ApplicationMaster. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
PreemptionResourceRequest | Description of resources requested back by the cluster. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
Priority | | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
Progressable | A facility for reporting progress. | Interface | org.apache.hadoop.util | Apache Hadoop |
|
ProtocolTypes | | Interface | org.apache.hadoop.registry.client.types | Apache Hadoop |
|
PseudoDelegationTokenAuthenticator | The PseudoDelegationTokenAuthenticator provides support for Hadoop's pseudo authentication mechanism, which accepts the user name specified as a query string parameter. | Class | org.apache.hadoop.security.token.delegation.web | Apache Hadoop |
|
PureJavaCrc32 | A pure-java implementation of the CRC32 checksum that uses the same polynomial as the built-in native CRC32. | Class | org.apache.hadoop.util | Apache Hadoop |
|
PureJavaCrc32C | A pure-java implementation of the CRC32 checksum that uses the CRC32-C polynomial, the same polynomial used by iSCSI | Class | org.apache.hadoop.util | Apache Hadoop |
|
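PureJavaCrc32 uses the same polynomial as the JDK's built-in java.util.zip.CRC32, so both produce identical checksums for the same bytes. A JDK-only sketch of the usual update/getValue pattern (the class name CrcDemo is illustrative):

```java
import java.util.zip.CRC32;

// Compute a CRC32 checksum with the JDK class; Hadoop's PureJavaCrc32
// implements the same polynomial, so it yields the same values.
public class CrcDemo {
    public static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }
}
```

CRC32-C (as in PureJavaCrc32C) uses a different polynomial, so its checksums are not interchangeable with these.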
QueueACL | QueueACL enumerates the various ACLs for queues. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
QueueAclsInfo | | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
QueueInfo | Class that contains the information regarding the Job Queues which are maintained by the Hadoop Map/Reduce framework. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
QueueInfo | QueueInfo is a report of the runtime information of the queue. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
QueueState | Enum representing queue state. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
QueueState | State of a queue: RUNNING (normal state) or STOPPED (new applications are not accepted). | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
QueueUserACLInfo | QueueUserACLInfo provides information about the QueueACLs for a given user. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
RawComparable | Interface for objects that can be compared through RawComparator. | Interface | org.apache.hadoop.io.file.tfile | Apache Hadoop |
|
RawComparator | A Comparator that operates directly on byte representations of objects. | Interface | org.apache.hadoop.io | Apache Hadoop |
|
RawLocalFileSystem | | Class | org.apache.hadoop.fs | Apache Hadoop |
|
Rcc | | Class | org.apache.hadoop.record.compiler.generated | Apache Hadoop |
|
RccConstants | | Interface | org.apache.hadoop.record.compiler.generated | Apache Hadoop |
|
RccTask | Hadoop record compiler ant Task This task takes the given record definition files and compiles them into | Class | org.apache.hadoop.record.compiler.ant | Apache Hadoop |
|
RccTokenManager | | Class | org.apache.hadoop.record.compiler.generated | Apache Hadoop |
|
ReadOption | Options that can be used when reading from a FileSystem. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
ReconfigurationTaskStatus | | Class | org.apache.hadoop.conf | Apache Hadoop |
|
Record | Abstract class that is extended by generated classes. | Class | org.apache.hadoop.record | Apache Hadoop |
|
RecordComparator | | Class | org.apache.hadoop.record | Apache Hadoop |
|
RecordInput | Interface that all the Deserializers have to implement. | Interface | org.apache.hadoop.record | Apache Hadoop |
|
RecordOutput | Interface that all the serializers have to implement. | Interface | org.apache.hadoop.record | Apache Hadoop |
|
RecordReader | RecordReader reads key/value pairs from an InputSplit, typically converting the byte-oriented view of the input into a record-oriented view. | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
RecordReader | | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
RecordTypeInfo | A record's Type Information object which can read/write itself. | Class | org.apache.hadoop.record.meta | Apache Hadoop |
|
RecordWriter | RecordWriter writes the output key/value pairs to an output file. RecordWriter implementations write the job outputs to the FileSystem. | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
RecordWriter | RecordWriter writes the output key/value pairs to an output file. RecordWriter implementations write the job outputs to the FileSystem. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
ReduceContext | The context passed to the Reducer. | Interface | org.apache.hadoop.mapreduce | Apache Hadoop |
|
Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. The number of Reducers for the job is set by the user via JobConf.setNumReduceTasks(int). | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
Reducer | Reduces a set of intermediate values which share a key to a smaller set of values. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
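Both Reducer variants implement the same contract: all values sharing a key arrive together and are folded into a smaller set. A plain-Java model of that contract, using the classic word-count fold (summing counts per word); the class and method names are illustrative, not Hadoop API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Model of the reduce phase: the framework has already grouped values
// by key; the reducer folds each group into a single output value.
public class ReduceModel {
    public static Map<String, Integer> reduce(Map<String, List<Integer>> grouped) {
        Map<String, Integer> out = new HashMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;  // fold the group
            out.put(e.getKey(), sum);
        }
        return out;
    }
}
```

In real Hadoop code the grouped values arrive as an Iterable in reduce(key, values, context) rather than a pre-built map, but the fold per key is the same idea.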
ReflectionUtils | | Class | org.apache.hadoop.util | Apache Hadoop |
|
RegexFilter | | Class | org.apache.hadoop.metrics2.filter | Apache Hadoop |
|
RegexMapper | A Mapper that extracts text matching a regular expression. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
RegexMapper | A Mapper that extracts text matching a regular expression. | Class | org.apache.hadoop.mapreduce.lib.map | Apache Hadoop |
|
RegisterApplicationMasterRequest | The request sent by the ApplicationMaster to the ResourceManager on registration. It includes details such as the host on which the ApplicationMaster is running, its RPC port, and its tracking URL. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
RegisterApplicationMasterResponse | The response sent by the ResourceManager to a new ApplicationMaster on registration. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
RegistryBindingSource | | Interface | org.apache.hadoop.registry.client.impl.zk | Apache Hadoop |
|
RegistryConstants | | Interface | org.apache.hadoop.registry.client.api | Apache Hadoop |
|
RegistryIOException | Base exception for registry operations. | Class | org.apache.hadoop.registry.client.exceptions | Apache Hadoop |
|
RegistryOperations | | Interface | org.apache.hadoop.registry.client.api | Apache Hadoop |
|
RegistryOperationsClient | This is the client service for applications to work with the registry. | Class | org.apache.hadoop.registry.client.impl | Apache Hadoop |
|
RegistryOperationsService | The Registry operations service. | Class | org.apache.hadoop.registry.client.impl.zk | Apache Hadoop |
|
RegistryPathStatus | Output of a RegistryOperations stat() call. | Class | org.apache.hadoop.registry.client.types | Apache Hadoop |
|
RegistryTypeUtils | | Class | org.apache.hadoop.registry.client.binding | Apache Hadoop |
|
RegistryUtils | Utility methods for working with a registry. | Class | org.apache.hadoop.registry.client.binding | Apache Hadoop |
|
ReleaseSharedCacheResourceRequest | The request from clients to release a resource in the shared cache. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
ReleaseSharedCacheResourceResponse | The response to clients from the SharedCacheManager when releasing a resource in the shared cache. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
RemoveScheme | Defines the different remove scheme for retouched Bloom filters. | Interface | org.apache.hadoop.util.bloom | Apache Hadoop |
|
RenewDelegationTokenRequest | The request issued by the client to renew a delegation token from the ResourceManager. | Interface | org.apache.hadoop.mapreduce.v2.api.protocolrecords | Apache Hadoop |
|
RenewDelegationTokenResponse | The response to a renewDelegationToken call to the ResourceManager. | Interface | org.apache.hadoop.mapreduce.v2.api.protocolrecords | Apache Hadoop |
|
Reporter | A facility for Map-Reduce applications to report progress and update counters, status information etc. | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
ReservationDefinition | ReservationDefinition captures the set of resource and time constraints the user cares about regarding a reservation. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ReservationDeleteRequest | ReservationDeleteRequest captures the set of requirements the user has to delete an existing reservation. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
ReservationDeleteResponse | ReservationDeleteResponse contains the answer of the admission control system in the ResourceManager to a reservation delete request. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
ReservationId | ReservationId represents the globally unique identifier for a reservation; the globally unique nature of the identifier is achieved by using the cluster timestamp. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ReservationRequest | ReservationRequest represents the request made by an application to the ResourceManager to reserve Resources. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ReservationRequestInterpreter | Enumeration of the various types of dependencies among multiple ReservationRequests within one ReservationDefinition. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ReservationRequests | ReservationRequests captures the set of resources and constraints the user cares about regarding a reservation. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ReservationSubmissionRequest | ReservationSubmissionRequest captures the set of requirements the user has to create a reservation. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
ReservationSubmissionResponse | ReservationSubmissionResponse contains the answer of the admission control system in the ResourceManager to a reservation submission request. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
ReservationUpdateRequest | ReservationUpdateRequest captures the set of requirements the user has to update an existing reservation. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
ReservationUpdateResponse | ReservationUpdateResponse contains the answer of the admission control system in the ResourceManager to a reservation update request. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
ResetableIterator | This defines an interface to a stateful Iterator that can replay elements added to it directly. | Interface | org.apache.hadoop.mapred.join | Apache Hadoop |
|
ResetableIterator | This defines an interface to a stateful Iterator that can replay elements added to it directly. | Interface | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
Resource | Resource models a set of computer resources in the cluster. Currently it models both memory and CPU. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ResourceBlacklistRequest | ResourceBlacklistRequest encapsulates the list of resource-names which should be added to or removed from the blacklist of resources for the application. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ResourceCalculatorProcessTree | Interface class to obtain process resource usage. NOTE: this class should not be used by external users, only by external developers extending it. | Class | org.apache.hadoop.yarn.util | Apache Hadoop |
|
ResourceOption | | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
ResourceRequest | ResourceRequest represents the request made by an application to the ResourceManager to obtain various Container allocations. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
RetouchedBloomFilter | A retouched Bloom filter allows the removal of selected false positives at the cost of introducing random false negatives, with the benefit of eliminating some random false positives at the same time. | Class | org.apache.hadoop.util.bloom | Apache Hadoop |
|
RMDelegationTokenIdentifier | | Class | org.apache.hadoop.yarn.security.client | Apache Hadoop |
|
RMDelegationTokenSelector | | Class | org.apache.hadoop.yarn.security.client | Apache Hadoop |
|
RMProxy | | Class | org.apache.hadoop.yarn.client | Apache Hadoop |
|
RunningJob | RunningJob is the user-interface to query for details on a running Map-Reduce job. | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
S3Exception | Thrown if there is a problem communicating with Amazon S3. | Class | org.apache.hadoop.fs.s3 | Apache Hadoop |
|
S3FileSystem | A block-based FileSystem backed by Amazon S3. See also NativeS3FileSystem. | Class | org.apache.hadoop.fs.s3 | Apache Hadoop |
|
S3FileSystemException | Thrown when there is a fatal exception while using S3FileSystem. | Class | org.apache.hadoop.fs.s3 | Apache Hadoop |
|
SchedulerSecurityInfo | | Class | org.apache.hadoop.yarn.security | Apache Hadoop |
|
ScriptBasedMapping | This class implements the DNSToSwitchMapping interface using a script configured via the net.topology.script.file.name property. | Class | org.apache.hadoop.net | Apache Hadoop |
|
Seekable | Stream that permits seeking. | Interface | org.apache.hadoop.fs | Apache Hadoop |
|
SequenceFile | SequenceFiles are flat files consisting of binary key/value pairs. SequenceFile provides Writer, Reader, and Sorter classes for writing, reading, and sorting, respectively. | Class | org.apache.hadoop.io | Apache Hadoop |
|
SequenceFileAsBinaryInputFormat | | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
SequenceFileAsBinaryInputFormat | | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
SequenceFileAsBinaryOutputFormat | An OutputFormat that writes keys, values to SequenceFiles in binary(raw) format | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
SequenceFileAsBinaryOutputFormat | An OutputFormat that writes keys, values to SequenceFiles in binary(raw) format | Class | org.apache.hadoop.mapreduce.lib.output | Apache Hadoop |
|
SequenceFileAsTextInputFormat | This class is similar to SequenceFileInputFormat, except it generates a SequenceFileAsTextRecordReader, which converts the input keys and values to their String forms. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
SequenceFileAsTextInputFormat | This class is similar to SequenceFileInputFormat, except it generates a SequenceFileAsTextRecordReader, which converts the input keys and values to their String forms. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
SequenceFileAsTextRecordReader | This class converts the input keys and values to their String forms by calling their toString() methods. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
SequenceFileAsTextRecordReader | This class converts the input keys and values to their String forms by calling their toString() methods. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
SequenceFileInputFilter | A class that allows a map/red job to work on a sample of sequence files. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
SequenceFileInputFilter | A class that allows a map/red job to work on a sample of sequence files. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
SequenceFileInputFormat | An InputFormat for SequenceFiles. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
SequenceFileInputFormat | An InputFormat for SequenceFiles. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
SequenceFileOutputFormat | An OutputFormat that writes SequenceFiles. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
SequenceFileOutputFormat | An OutputFormat that writes SequenceFiles. | Class | org.apache.hadoop.mapreduce.lib.output | Apache Hadoop |
|
SequenceFileRecordReader | An RecordReader for SequenceFiles. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
SequenceFileRecordReader | An RecordReader for SequenceFiles. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
ServerProxy | | Class | org.apache.hadoop.yarn.client | Apache Hadoop |
|
Servers | | Class | org.apache.hadoop.metrics2.util | Apache Hadoop |
|
Service | | Interface | org.apache.hadoop.service | Apache Hadoop |
|
ServiceFailedException | Exception thrown to indicate that an operation performed to modify the state of a service or application failed. | Class | org.apache.hadoop.ha | Apache Hadoop |
|
ServiceOperations | This class contains a set of methods to work with services, especially to walk them through their lifecycle. | Class | org.apache.hadoop.service | Apache Hadoop |
|
ServiceRecord | JSON-marshallable description of a single component. | Class | org.apache.hadoop.registry.client.types | Apache Hadoop |
|
ServiceStateChangeListener | Interface to notify state changes of a service. | Interface | org.apache.hadoop.service | Apache Hadoop |
|
ServiceStateException | Exception that is raised on state change operations. | Class | org.apache.hadoop.service | Apache Hadoop |
|
ServiceStateModel | | Class | org.apache.hadoop.service | Apache Hadoop |
|
SetFile | A file-based set of keys. | Class | org.apache.hadoop.io | Apache Hadoop |
|
SharedCacheChecksum | | Interface | org.apache.hadoop.yarn.sharedcache | Apache Hadoop |
|
SharedCacheChecksumFactory | | Class | org.apache.hadoop.yarn.sharedcache | Apache Hadoop |
|
SharedCacheClient | This is the client for YARN's shared cache. | Class | org.apache.hadoop.yarn.client.api | Apache Hadoop |
|
ShortWritable | A WritableComparable for shorts. | Class | org.apache.hadoop.io | Apache Hadoop |
|
SimpleCharStream | An implementation of interface CharStream, where the stream is assumed to contain only ASCII characters (without unicode processing). | Class | org.apache.hadoop.record.compiler.generated | Apache Hadoop |
|
SingleArcTransition | Hook for Transition. | Interface | org.apache.hadoop.yarn.state | Apache Hadoop |
|
SkipBadRecords | Utility class for skip bad records functionality. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
SocksSocketFactory | | Class | org.apache.hadoop.net | Apache Hadoop |
|
SortedMapWritable | A Writable SortedMap. | Class | org.apache.hadoop.io | Apache Hadoop |
|
SpanReceiverInfo | | Class | org.apache.hadoop.tracing | Apache Hadoop |
|
SpanReceiverInfoBuilder | | Class | org.apache.hadoop.tracing | Apache Hadoop |
|
SplitCompressionInputStream | An InputStream covering a range of compressed data. | Class | org.apache.hadoop.io.compress | Apache Hadoop |
|
SplitLocationInfo | | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
SplittableCompressionCodec | This interface is meant to be implemented by compression codecs that are capable of compressing/decompressing a stream starting at any arbitrary position. | Interface | org.apache.hadoop.io.compress | Apache Hadoop |
|
StandardSocketFactory | | Class | org.apache.hadoop.net | Apache Hadoop |
|
StartContainerRequest | The request sent by the ApplicationMaster to the NodeManager to start a container. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
StartContainersRequest | The request, containing a list of StartContainerRequests, sent by the ApplicationMaster to the NodeManager to start containers. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
StartContainersResponse | The response sent by the NodeManager to the ApplicationMaster when asked to start allocated containers. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
StateMachine | | Interface | org.apache.hadoop.yarn.state | Apache Hadoop |
|
StateMachineFactory | State machine topology. | Class | org.apache.hadoop.yarn.state | Apache Hadoop |
|
StopContainersRequest | The request sent by the ApplicationMaster to the NodeManager to stop containers. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
StopContainersResponse | The response sent by the NodeManager to the ApplicationMaster when asked to stop allocated containers. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
StorageType | Defines the types of supported storage media. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
StreamBackedIterator | This class provides an implementation of ResetableIterator. | Class | org.apache.hadoop.mapred.join | Apache Hadoop |
|
StreamBackedIterator | This class provides an implementation of ResetableIterator. | Class | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
StrictPreemptionContract | Enumeration of particular allocations to be reclaimed. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
Stringifier | Stringifier offers two methods: one to convert an object to a string representation, and one to restore the object given its string representation. | Interface | org.apache.hadoop.io | Apache Hadoop |
|
StringInterner | Provides equivalent behavior to String.intern() to optimize performance. | Class | org.apache.hadoop.util | Apache Hadoop |
|
StringValueMax | This class implements a value aggregator that maintains the largest of a sequence of strings. | Class | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
StringValueMax | This class implements a value aggregator that maintains the largest of a sequence of strings. | Class | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
StringValueMin | This class implements a value aggregator that maintains the smallest of a sequence of strings. | Class | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
StringValueMin | This class implements a value aggregator that maintains the smallest of a sequence of strings. | Class | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
StructTypeID | | Class | org.apache.hadoop.record.meta | Apache Hadoop |
|
SubmitApplicationRequest | The request sent by a client to submit an application to the ResourceManager. The request, via ApplicationSubmissionContext, contains details such as the queue, the resource requirements, and the ContainerLaunchContext of the ApplicationMaster. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
SubmitApplicationResponse | The response sent by the ResourceManager to a client on application submission. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
Submitter | The main entry point and job submitter. | Class | org.apache.hadoop.mapred.pipes | Apache Hadoop |
|
Syncable | Interface for flush and sync operations. | Interface | org.apache.hadoop.fs | Apache Hadoop |
|
SystemClock | An implementation of Clock that returns the current system time in milliseconds. | Class | org.apache.hadoop.yarn.util | Apache Hadoop |
|
TableMapping | Simple DNSToSwitchMapping implementation that reads a 2 column text file. | Class | org.apache.hadoop.net | Apache Hadoop |
|
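TableMapping resolves hosts to racks from a two-column text file. A hypothetical JDK-only sketch of that lookup — assuming one "<hostname> <rack>" pair per line and Hadoop's usual /default-rack fallback; the class and method names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a two-column topology table: each non-empty line maps a
// hostname to a rack path; unknown hosts fall back to a default rack.
public class TopologyTableSketch {
    static final String DEFAULT_RACK = "/default-rack";

    public static Map<String, String> parse(String fileContents) {
        Map<String, String> map = new HashMap<>();
        for (String line : fileContents.split("\n")) {
            String[] cols = line.trim().split("\\s+");
            if (cols.length == 2) map.put(cols[0], cols[1]);  // host -> rack
        }
        return map;
    }

    public static String resolve(Map<String, String> map, String host) {
        return map.getOrDefault(host, DEFAULT_RACK);
    }
}
```

The fallback matters operationally: a host missing from the file silently lands on the default rack, which can defeat rack-aware placement.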
TaskAttemptContext | | Interface | org.apache.hadoop.mapred | Apache Hadoop |
|
TaskAttemptContext | The context for task attempts. | Interface | org.apache.hadoop.mapreduce | Apache Hadoop |
|
TaskAttemptID | TaskAttemptID represents the immutable and unique identifier for a task attempt. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
TaskAttemptID | TaskAttemptID represents the immutable and unique identifier for a task attempt. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
TaskCompletionEvent | | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
TaskCompletionEvent | | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
TaskCounter | | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
TaskID | TaskID represents the immutable and unique identifier for a Map or Reduce Task. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
TaskID | TaskID represents the immutable and unique identifier for a Map or Reduce Task. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
TaskInputOutputContext | A context object that allows input and output from the task. | Interface | org.apache.hadoop.mapreduce | Apache Hadoop |
|
TaskReport | A report on the state of a task. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
TaskTrackerInfo | Information about TaskTracker. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
TaskType | Enum for map, reduce, job-setup, job-cleanup, task-cleanup task types. | Class | org.apache.hadoop.mapreduce | Apache Hadoop |
|
Text | This class stores text using standard UTF8 encoding. | Class | org.apache.hadoop.io | Apache Hadoop |
|
TextInputFormat | An InputFormat for plain text files. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
TextInputFormat | An InputFormat for plain text files. | Class | org.apache.hadoop.mapreduce.lib.input | Apache Hadoop |
|
TextOutputFormat | An OutputFormat that writes plain text files. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
TextOutputFormat | An OutputFormat that writes plain text files. | Class | org.apache.hadoop.mapreduce.lib.output | Apache Hadoop |
|
TextSplitter | | Class | org.apache.hadoop.mapreduce.lib.db | Apache Hadoop |
|
TFile | A TFile is a container of key-value pairs. | Class | org.apache.hadoop.io.file.tfile | Apache Hadoop |
|
TimelineClient | A client library that can be used to post some information in terms of a number of conceptual entities. | Class | org.apache.hadoop.yarn.client.api | Apache Hadoop |
|
TimelineDelegationTokenIdentifier | | Class | org.apache.hadoop.yarn.security.client | Apache Hadoop |
|
TimelineDelegationTokenResponse | | Class | org.apache.hadoop.yarn.api.records.timeline | Apache Hadoop |
|
TimelineDelegationTokenSelector | | Class | org.apache.hadoop.yarn.security.client | Apache Hadoop |
|
TimelineDomain | This class contains the information about a timeline domain, which a user can use to host a number of timeline entities, isolating them from others'. | Class | org.apache.hadoop.yarn.api.records.timeline | Apache Hadoop |
|
TimelineDomains | The class that hosts a list of timeline domains. | Class | org.apache.hadoop.yarn.api.records.timeline | Apache Hadoop |
|
TimelineEntities | The class that hosts a list of timeline entities. | Class | org.apache.hadoop.yarn.api.records.timeline | Apache Hadoop |
|
TimelineEntity | The class that contains the meta information of some conceptual entity and its related events. | Class | org.apache.hadoop.yarn.api.records.timeline | Apache Hadoop |
|
TimelineEvent | The class that contains the information of an event that is related to some conceptual entity of an application. | Class | org.apache.hadoop.yarn.api.records.timeline | Apache Hadoop |
|
TimelineEvents | The class that hosts a list of events, which are categorized according to their related entities. | Class | org.apache.hadoop.yarn.api.records.timeline | Apache Hadoop |
|
TimelinePutResponse | A class that holds a list of put errors. | Class | org.apache.hadoop.yarn.api.records.timeline | Apache Hadoop |
|
TimelineUtils | The helper class for the timeline module. | Class | org.apache.hadoop.yarn.util.timeline | Apache Hadoop |
|
Token | Describes the input token stream. | Class | org.apache.hadoop.record.compiler.generated | Apache Hadoop |
|
Token | Token is the security entity used by the framework to verify authenticity of any resource. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
TokenCache | This class provides user facing APIs for transferring secrets from the job client to the tasks. | Class | org.apache.hadoop.mapreduce.security | Apache Hadoop |
|
TokenCounterMapper | Tokenize the input values and emit each word with a count of 1. | Class | org.apache.hadoop.mapreduce.lib.map | Apache Hadoop |
|
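The word-count map pattern implemented by TokenCounterMapper can be sketched without the MapReduce machinery: tokenize each input value and emit a (word, 1) pair per token, for a combiner/reducer to sum later. TokenCountSketch and its map helper below are hypothetical stand-ins for illustration, not the real mapper API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.StringTokenizer;

// Sketch of the TokenCounterMapper pattern: split the value on whitespace
// and emit (token, 1) for each token. The real class emits Text/IntWritable
// through a Mapper.Context instead of returning a list.
public class TokenCountSketch {
    static List<Map.Entry<String, Integer>> map(String value) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        StringTokenizer tok = new StringTokenizer(value); // whitespace tokenizer
        while (tok.hasMoreTokens()) {
            out.add(Map.entry(tok.nextToken(), 1));
        }
        return out;
    }

    public static void main(String[] args) {
        // "to be or not to be" yields six (word, 1) pairs before reduction
        System.out.println(map("to be or not to be").size());
    }
}
```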
TokenCountMapper | A Mapper that maps text values into (token, freq) pairs. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
TokenMgrError | | Class | org.apache.hadoop.record.compiler.generated | Apache Hadoop |
|
Tool | A tool interface that supports handling of generic command-line options. | Interface | org.apache.hadoop.util | Apache Hadoop |
|
ToolRunner | A utility to help run Tools. | Class | org.apache.hadoop.util | Apache Hadoop |
|
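The Tool/ToolRunner pair above defines the standard pattern for Hadoop command-line drivers: a tool exposes a run(String[]) method returning an exit code, and the runner invokes it on the arguments. A minimal JDK-only sketch of that shape, with hypothetical MiniTool/MiniRunner stand-ins (the real ToolRunner also parses generic options such as -D and -conf into a Configuration, which is omitted here):

```java
// Hypothetical stand-in for org.apache.hadoop.util.Tool: a driver that
// runs with the leftover command-line args and returns an exit code.
interface MiniTool {
    int run(String[] args);
}

// Hypothetical stand-in for ToolRunner: hands the args to the tool and
// returns its exit code, which the caller typically passes to System.exit.
final class MiniRunner {
    static int run(MiniTool tool, String[] args) {
        return tool.run(args);
    }
}

public class ToolSketch {
    public static void main(String[] args) {
        MiniTool driver = a -> {
            System.out.println("would submit a job with " + a.length + " args");
            return 0; // 0 = success, nonzero = failure, as with real Tools
        };
        int exit = MiniRunner.run(driver, new String[]{"in", "out"});
        System.out.println("exit=" + exit);
    }
}
```

Separating the driver (Tool) from the launcher (ToolRunner) is what lets every Hadoop job accept the same generic options without each driver reimplementing the parsing.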
TotalOrderPartitioner | Partitioner effecting a total order by reading split points from an externally generated source. | Class | org.apache.hadoop.mapred.lib | Apache Hadoop |
|
TotalOrderPartitioner | Partitioner effecting a total order by reading split points from an externally generated source. | Class | org.apache.hadoop.mapreduce.lib.partition | Apache Hadoop |
|
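The idea behind TotalOrderPartitioner can be shown in a few lines: route each key to a reducer by binary search over a sorted array of split points, so the concatenation of the reducer outputs is globally sorted. The real class reads its split points from a partition file produced by a sampler; TotalOrderSketch below hard-codes them for illustration.

```java
import java.util.Arrays;

// Sketch of split-point partitioning: n split points define n+1 ordered
// key ranges, one per reducer. Keys equal to a split point go to the
// higher partition.
public class TotalOrderSketch {
    static int partition(String key, String[] splitPoints) {
        int i = Arrays.binarySearch(splitPoints, key);
        // binarySearch returns (-(insertion point) - 1) on a miss
        return i < 0 ? -i - 1 : i + 1;
    }

    public static void main(String[] args) {
        String[] splits = {"g", "n"};   // 3 partitions: [..g) [g..n) [n..)
        System.out.println(partition("apple", splits)); // 0
        System.out.println(partition("melon", splits)); // 1
        System.out.println(partition("zebra", splits)); // 2
    }
}
```

Because partition boundaries follow key order, sorting within each reducer is enough to make the overall output totally ordered.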
TraceAdminProtocol | Protocol interface that provides tracing. | Interface | org.apache.hadoop.tracing | Apache Hadoop |
|
TraceAdminProtocolPB | | Interface | org.apache.hadoop.tracing | Apache Hadoop |
|
Trash | Provides a trash facility which supports pluggable Trash policies. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
TrashPolicy | This interface is used for implementing different Trash policies. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
TupleWritable | Writable type storing multiple Writables. | Class | org.apache.hadoop.mapred.join | Apache Hadoop |
|
TupleWritable | Writable type storing multiple Writables. | Class | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
TwoDArrayWritable | A Writable for 2D arrays containing a matrix of instances of a class. | Class | org.apache.hadoop.io | Apache Hadoop |
|
TypeID | Represents typeID for basic types. | Class | org.apache.hadoop.record.meta | Apache Hadoop |
|
UniqValueCount | This class implements a value aggregator that dedupes a sequence of objects. | Class | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
UniqValueCount | This class implements a value aggregator that dedupes a sequence of objects. | Class | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
UnsupportedFileSystemException | Thrown when the file system for a given file system name/scheme is not supported. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
URL | URL represents a serializable URL. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
UserDefinedValueAggregatorDescriptor | This class implements a wrapper for a user defined value aggregator descriptor. | Class | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
UserDefinedValueAggregatorDescriptor | This class implements a wrapper for a user defined value aggregator descriptor. | Class | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
UseSharedCacheResourceRequest | The request from clients to the SharedCacheManager that claims a resource in the shared cache. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
UseSharedCacheResourceResponse | The response from the SharedCacheManager to the client that indicates whether a requested resource exists in the cache. | Class | org.apache.hadoop.yarn.api.protocolrecords | Apache Hadoop |
|
UTCClock | | Class | org.apache.hadoop.yarn.util | Apache Hadoop |
|
Util | | Class | org.apache.hadoop.metrics.spi | Apache Hadoop |
|
Utils | Supporting utility classes used by TFile and shared by users of TFile. | Class | org.apache.hadoop.io.file.tfile | Apache Hadoop |
|
Utils | A utility class. | Class | org.apache.hadoop.mapred | Apache Hadoop |
|
Utils | Various utility functions for Hadoop record I/O platform. | Class | org.apache.hadoop.record.meta | Apache Hadoop |
|
Utils | Various utility functions for Hadoop record I/O runtime. | Class | org.apache.hadoop.record | Apache Hadoop |
|
ValueAggregator | This interface defines the minimal protocol for value aggregators. | Interface | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
ValueAggregator | This interface defines the minimal protocol for value aggregators. | Interface | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
ValueAggregatorBaseDescriptor | This class implements the common functionalities of the subclasses of ValueAggregatorDescriptor class. | Class | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
ValueAggregatorBaseDescriptor | This class implements the common functionalities of the subclasses of ValueAggregatorDescriptor class. | Class | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
ValueAggregatorCombiner | This class implements the generic combiner of Aggregate. | Class | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
ValueAggregatorCombiner | This class implements the generic combiner of Aggregate. | Class | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
ValueAggregatorDescriptor | This interface defines the contract a value aggregator descriptor must support. | Interface | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
ValueAggregatorDescriptor | This interface defines the contract a value aggregator descriptor must support. | Interface | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
ValueAggregatorJob | This is the main class for creating a map/reduce job using Aggregate framework. | Class | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
ValueAggregatorJob | This is the main class for creating a map/reduce job using Aggregate framework. | Class | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
ValueAggregatorJobBase | This abstract class implements some common functionalities of the generic mapper, reducer and combiner classes of Aggregate. | Class | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
ValueAggregatorJobBase | This abstract class implements some common functionalities of the generic mapper, reducer and combiner classes of Aggregate. | Class | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
ValueAggregatorMapper | This class implements the generic mapper of Aggregate. | Class | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
ValueAggregatorMapper | This class implements the generic mapper of Aggregate. | Class | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
ValueAggregatorReducer | This class implements the generic reducer of Aggregate. | Class | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
ValueAggregatorReducer | This class implements the generic reducer of Aggregate. | Class | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
ValueHistogram | This class implements a value aggregator that computes the histogram of a sequence of strings. | Class | org.apache.hadoop.mapred.lib.aggregate | Apache Hadoop |
|
ValueHistogram | This class implements a value aggregator that computes the histogram of a sequence of strings. | Class | org.apache.hadoop.mapreduce.lib.aggregate | Apache Hadoop |
|
VectorTypeID | Represents typeID for vector. | Class | org.apache.hadoop.record.meta | Apache Hadoop |
|
VersionedWritable | A base class for Writables that provides version checking. | Class | org.apache.hadoop.io | Apache Hadoop |
|
VersionMismatchException | Thrown when Hadoop cannot read the version of the data stored. | Class | org.apache.hadoop.fs.s3 | Apache Hadoop |
|
VersionMismatchException | Thrown by VersionedWritable. | Class | org.apache.hadoop.io | Apache Hadoop |
|
ViewFileSystem | ViewFileSystem (extends the FileSystem interface) implements a client-side mount table. | Class | org.apache.hadoop.fs.viewfs | Apache Hadoop |
|
ViewFs | ViewFs (extends the AbstractFileSystem interface) implements a client-side mount table. | Class | org.apache.hadoop.fs.viewfs | Apache Hadoop |
|
VIntWritable | A WritableComparable for integer values stored in variable-length format. | Class | org.apache.hadoop.io | Apache Hadoop |
|
VLongWritable | A WritableComparable for longs in a variable-length format. | Class | org.apache.hadoop.io | Apache Hadoop |
|
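VIntWritable and VLongWritable exist because most integers in practice are small, so a variable-length encoding beats a fixed 4 or 8 bytes. The sketch below illustrates the space saving with a simple base-128 varint using only JDK streams; Hadoop's actual on-disk format (WritableUtils.writeVLong) is a different scheme based on a leading length byte, so VarIntSketch is an illustration of the idea, not the real codec.

```java
import java.io.ByteArrayOutputStream;

// Base-128 varint: emit 7 payload bits per byte, high bit set on all but
// the last byte. Small values take 1 byte instead of 8 for a raw long.
public class VarIntSketch {
    static byte[] encode(long v) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((v & ~0x7FL) != 0) {               // more than 7 bits remain
            out.write((int) ((v & 0x7F) | 0x80)); // continuation bit set
            v >>>= 7;
        }
        out.write((int) v);                       // final byte, high bit clear
        return out.toByteArray();
    }

    public static void main(String[] args) {
        System.out.println(encode(1).length);              // 1 byte vs 8 raw
        System.out.println(encode(300).length);            // 2 bytes
        System.out.println(encode(Long.MAX_VALUE).length); // 9 bytes worst case
    }
}
```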
VolumeId | Opaque interface that identifies a disk location. | Interface | org.apache.hadoop.fs | Apache Hadoop |
|
Wasb | WASB implementation of AbstractFileSystem. | Class | org.apache.hadoop.fs.azure | Apache Hadoop |
|
WasbFsck | An fsck tool implementation for WASB that does various admin/cleanup/recovery tasks on the WASB file system. | Class | org.apache.hadoop.fs.azure | Apache Hadoop |
|
WrappedMapper | A Mapper which wraps a given one to allow custom Mapper.Context implementations. | Class | org.apache.hadoop.mapreduce.lib.map | Apache Hadoop |
|
WrappedRecordReader | Proxy class for a RecordReader participating in the join framework. | Class | org.apache.hadoop.mapred.join | Apache Hadoop |
|
WrappedRecordReader | Proxy class for a RecordReader participating in the join framework. | Class | org.apache.hadoop.mapreduce.lib.join | Apache Hadoop |
|
WrappedReducer | A Reducer which wraps a given one to allow for custom Reducer.Context implementations. | Class | org.apache.hadoop.mapreduce.lib.reduce | Apache Hadoop |
|
Writable | A serializable object which implements a simple, efficient, serialization protocol, based on DataInput and DataOutput. | Interface | org.apache.hadoop.io | Apache Hadoop |
|
WritableComparable | A Writable which is also Comparable. | Interface | org.apache.hadoop.io | Apache Hadoop |
|
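The Writable contract underlying these entries is a round trip: write(DataOutput) followed by readFields(DataInput) must reconstruct an equal object, with readFields consuming fields in exactly the order write produced them. A JDK-only sketch of that contract, where PairWritable is a hypothetical example type (the real interfaces live in org.apache.hadoop.io):

```java
import java.io.*;

public class WritableSketch {
    // Hypothetical two-field record following the Writable pattern.
    static class PairWritable {
        int left;
        long right;

        void write(DataOutput out) throws IOException {
            out.writeInt(left);
            out.writeLong(right);
        }

        void readFields(DataInput in) throws IOException {
            left = in.readInt();   // must match the order used in write()
            right = in.readLong();
        }
    }

    // Serialize then deserialize, returning the reconstructed pair.
    static PairWritable roundTrip(int left, long right) {
        try {
            PairWritable a = new PairWritable();
            a.left = left;
            a.right = right;
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            a.write(new DataOutputStream(buf));
            PairWritable b = new PairWritable();
            b.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
            return b;
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen with in-memory streams
        }
    }

    public static void main(String[] args) {
        PairWritable p = roundTrip(7, 42L);
        System.out.println(p.left + "," + p.right);
    }
}
```

WritableComparable adds compareTo on top of this, so keys can be both shuffled as bytes and sorted; WritableComparator can further short-circuit by comparing serialized bytes directly.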
WritableComparator | A Comparator for WritableComparables. | Class | org.apache.hadoop.io | Apache Hadoop |
|
WritableFactories | Factories for non-public writables. | Class | org.apache.hadoop.io | Apache Hadoop |
|
WritableFactory | A factory for a class of Writable. | Interface | org.apache.hadoop.io | Apache Hadoop |
|
WritableSerialization | A Serialization for Writables that delegates to Writable. | Class | org.apache.hadoop.io.serializer | Apache Hadoop |
|
WritableUtils | | Class | org.apache.hadoop.io | Apache Hadoop |
|
XAttrCodec | The value of an XAttr is a byte[]; this class converts a byte[] to a string representation and back. | Class | org.apache.hadoop.fs | Apache Hadoop |
|
XAttrSetFlag | | Class | org.apache.hadoop.fs | Apache Hadoop |
|
XmlRecordInput | | Class | org.apache.hadoop.record | Apache Hadoop |
|
XmlRecordOutput | | Class | org.apache.hadoop.record | Apache Hadoop |
|
YarnApplicationAttemptState | Enumeration of the various states of a RMAppAttempt. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
YarnApplicationState | Enumeration of the various states of an ApplicationMaster. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
YarnClient | | Class | org.apache.hadoop.yarn.client.api | Apache Hadoop |
|
YarnClientApplication | | Class | org.apache.hadoop.yarn.client.api | Apache Hadoop |
|
YarnClusterMetrics | YarnClusterMetrics represents cluster metrics. | Class | org.apache.hadoop.yarn.api.records | Apache Hadoop |
|
YarnConfiguration | | Class | org.apache.hadoop.yarn.conf | Apache Hadoop |
|
YarnException | YarnException indicates exceptions from yarn servers. | Class | org.apache.hadoop.yarn.exceptions | Apache Hadoop |
|
YarnUncaughtExceptionHandler | This class is intended to be installed by calling Thread.setDefaultUncaughtExceptionHandler(). | Class | org.apache.hadoop.yarn | Apache Hadoop |
|
ZKFCProtocolPB | | Interface | org.apache.hadoop.ha.protocolPB | Apache Hadoop |